Software quality control is the set of procedures used by organizations to ensure that a software product will meet its quality goals at the best value to the customer, and to continually improve the organization’s ability to produce software products in the future.
Software quality control refers to specified functional requirements as well as non-functional requirements such as supportability, performance and usability. It also refers to the ability of software to perform well in unforeseeable scenarios and to keep a relatively low defect rate.
These specified procedures and outlined requirements lead to the idea of Verification and Validation and software testing.
It is distinct from software quality assurance, which encompasses processes and standards for the ongoing maintenance of high-quality products, e.g. software deliverables, documentation and processes, with the aim of avoiding defects. Software quality control, by contrast, validates artifacts' compliance against established criteria, with the aim of finding defects.
== Definition ==
Software quality control is a function that checks whether a software component, or supporting artifact meets requirements, or is "fit for use". Software Quality Control is commonly referred to as Testing.
== Quality Control Activities ==
Check that assumptions and criteria for the selection of data and the different factors related to data are documented.
Check for transcription errors in data input and reference.
Check the integrity of database files.
Check for consistency in data.
Check that the movement of inventory data among processing steps is correct.
Check for uncertainties in data, database files etc.
Undertake review of internal documentation.
Check methodological and data changes resulting in recalculations.
Undertake completeness checks.
Compare results to previous results.
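Several of the checks above can be automated as simple data-validation routines. The sketch below is illustrative only: the record fields and the acceptable quantity range are hypothetical, not drawn from any standard.

```python
def completeness_check(records, required_fields):
    """Return records missing any required field (a completeness check)."""
    return [r for r in records
            if any(f not in r or r[f] is None for f in required_fields)]

def consistency_check(records, field, low, high):
    """Return records whose value for `field` falls outside the expected range."""
    return [r for r in records if not (low <= r.get(field, low) <= high)]

inventory = [
    {"id": 1, "qty": 40, "unit": "kg"},
    {"id": 2, "qty": -5, "unit": "kg"},   # inconsistent: negative quantity
    {"id": 3, "unit": "kg"},              # incomplete: missing qty
]
incomplete = completeness_check(inventory, ["id", "qty", "unit"])
inconsistent = consistency_check(inventory, "qty", 0, 1000)
```

In practice such checks would run as part of a review pipeline and feed their findings back into the documented selection criteria.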
== Software Control Methods ==
Rome Laboratory software framework
Goal Question Metric paradigm
Risk management model
The Plan-Do-Check-Act model of quality control
Total software quality control
Spiral model of software development
Control management tools
== Verification and validation ==
Verification and validation assure that a software system meets a user's needs.
Verification: "Are we building the product right?" The software should conform to its specification.
Validation: "Are we building the right product?" The software should do what the user really requires.
Two principal objectives are:
Discovery of defects in a system.
Assessment of whether the system is usable in an operational situation.
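The distinction can be made concrete with a small sketch; the specification and function here are hypothetical, chosen only to illustrate the two questions.

```python
# Hypothetical specification: top_scores(scores, k) shall return the k highest
# scores in descending order.
def top_scores(scores, k):
    return sorted(scores, reverse=True)[:k]

# Verification ("Are we building the product right?"):
# check that the output conforms to the written specification.
assert top_scores([70, 95, 80, 60], 2) == [95, 80]

# Validation ("Are we building the right product?") cannot be a simple assertion:
# even a spec-conforming function fails validation if, say, the user actually
# needed the lowest scores. That is discovered by exercising the system against
# real user needs, not against the specification alone.
```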
== Verification and Validation Methods ==
Independent Verification and Validation (IV&V)
Requirements Traceability Matrix (RTM)
Requirements Verification Matrix
Software Quality Assurance
== Testing ==
Unit testing
Functional testing
Integration testing
System testing
Usability testing
Software performance testing
Load testing
Installation testing
Regression testing
Stress testing
Acceptance testing
Beta testing
Volume testing
Recovery testing
== See also ==
Software quality management
Software quality assurance
Verification and Validation (software)
Software testing
== References ==
Wesselius, Jacco, "Some Elementary Questions on Software Quality Control"
https://web.archive.org/web/20071023034030/http://satc.gsfc.nasa.gov/assure/agbsec5.txt
== External links ==
Software Engineering Body of Knowledge Ch. 11 Sec. 2.1
In computing, external memory algorithms or out-of-core algorithms are algorithms that are designed to process data that are too large to fit into a computer's main memory at once. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory (auxiliary memory) such as hard drives or tape drives, or when memory is on a computer network. External memory algorithms are analyzed in the external memory model.
== Model ==
External memory algorithms are analyzed in an idealized model of computation called the external memory model (or I/O model, or disk access model). The external memory model is an abstract machine similar to the RAM machine model, but with a cache in addition to main memory. The model captures the fact that read and write operations are much faster in a cache than in main memory, and that reading long contiguous blocks is faster than reading randomly using a disk read-and-write head. The running time of an algorithm in the external memory model is defined by the number of reads and writes to memory required. The model was introduced by Alok Aggarwal and Jeffrey Vitter in 1988. The external memory model is related to the cache-oblivious model, but algorithms in the external memory model may know both the block size and the cache size. For this reason, the model is sometimes referred to as the cache-aware model.
The model consists of a processor with an internal memory or cache of size M, connected to an unbounded external memory. Both the internal and external memory are divided into blocks of size B. One input/output or memory transfer operation consists of moving a block of B contiguous elements from external to internal memory, and the running time of an algorithm is determined by the number of these input/output operations.
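The cost accounting follows directly from the block transfers: scanning N contiguous elements costs ⌈N/B⌉ memory transfers rather than N individual reads. A minimal sketch:

```python
import math

def scan_io_cost(n, block_size):
    """Memory transfers needed to read n contiguous elements,
    one block of block_size elements per I/O."""
    return math.ceil(n / block_size)

# With B = 1024, scanning a million elements needs 977 block transfers
# instead of a million individual reads.
print(scan_io_cost(1_000_000, 1024))
```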
== Algorithms ==
Algorithms in the external memory model take advantage of the fact that retrieving one object from external memory retrieves an entire block of size B. This property is sometimes referred to as locality.
Searching for an element among N objects is possible in the external memory model using a B-tree with branching factor B. Using a B-tree, searching, insertion, and deletion can be achieved in {\displaystyle O(\log _{B}N)} time (in Big O notation). Information theoretically, this is the minimum running time possible for these operations, so using a B-tree is asymptotically optimal.
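The effect of the branching factor can be illustrated with a rough height calculation (a sketch of the bound, not a B-tree implementation):

```python
import math

def btree_height(n, branching):
    """Approximate number of levels (block reads) to search n keys
    in a balanced tree with the given branching factor."""
    return max(1, math.ceil(math.log(n, branching)))

# One block read per level: with B = 1024, a billion keys need ~3 I/Os,
# versus ~30 levels for a binary tree.
print(btree_height(10**9, 1024), btree_height(10**9, 2))
```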
External sorting is sorting in an external memory setting. External sorting can be done via distribution sort, which is similar to quicksort, or via a {\displaystyle {\tfrac {M}{B}}}-way merge sort. Both variants achieve the asymptotically optimal runtime of {\displaystyle O\left({\frac {N}{B}}\log _{\frac {M}{B}}{\frac {N}{B}}\right)} to sort N objects. This bound also applies to the fast Fourier transform in the external memory model.
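The merge-sort variant can be sketched in much-simplified form, with `run_size` standing in for the internal memory M: sort memory-sized runs, spill each to disk, then k-way merge the runs. Real external sorts stream blocks rather than loading whole runs back, so this is an illustration of the structure only.

```python
import heapq, os, pickle, tempfile

def external_sort(items, run_size):
    """Simplified external merge sort: sort runs of run_size elements,
    spill each run to a temporary file, then k-way merge the runs."""
    run_files = []
    for i in range(0, len(items), run_size):
        run = sorted(items[i:i + run_size])       # fits in "internal memory"
        f = tempfile.NamedTemporaryFile(delete=False)
        pickle.dump(run, f)
        f.close()
        run_files.append(f.name)

    runs = []
    for name in run_files:                        # read runs back from "disk"
        with open(name, "rb") as f:
            runs.append(pickle.load(f))
        os.unlink(name)
    return list(heapq.merge(*runs))               # k-way merge of sorted runs

print(external_sort([5, 3, 8, 1, 9, 2, 7], run_size=3))  # [1, 2, 3, 5, 7, 8, 9]
```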
The permutation problem is to rearrange N elements into a specific permutation. This can be done either by sorting, which requires the above sorting runtime, or by inserting each element in order, ignoring the benefit of locality. Thus, permutation can be done in {\displaystyle O\left(\min \left(N,{\frac {N}{B}}\log _{\frac {M}{B}}{\frac {N}{B}}\right)\right)} time.
== Applications ==
The external memory model captures the memory hierarchy, which is not modeled in other common models used in analyzing data structures, such as the random-access machine, and is useful for proving lower bounds for data structures. The model is also useful for analyzing algorithms that work on datasets too big to fit in internal memory.
A typical example is geographic information systems, especially digital elevation models, where the full data set easily exceeds several gigabytes or even terabytes of data.
This methodology extends beyond general purpose CPUs and also includes GPU computing as well as classical digital signal processing. In general-purpose computing on graphics processing units (GPGPU), powerful graphics cards (GPUs) with little memory (compared with the more familiar system memory, which is most often referred to simply as RAM) are utilized with relatively slow CPU-to-GPU memory transfer (when compared with computation bandwidth).
== History ==
An early use of the term "out-of-core" as an adjective is in 1962 in reference to devices that are other than the core memory of an IBM 360. An early use of the term "out-of-core" with respect to algorithms appears in 1971.
== See also ==
Cache-oblivious algorithm
External memory graph traversal
Online algorithm
Parallel external memory
Streaming algorithm
== References ==
== External links ==
Out of Core SVD and QR
Out of core graphics
ScaLAPACK design
In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory that was allocated by the program, but is no longer referenced; such memory is called garbage. Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp.
Garbage collection relieves the programmer from doing manual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so. Other, similar techniques include stack allocation, region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time, and affect performance as a result.
Resources other than memory, such as network sockets, database handles, windows, file descriptors, and device descriptors, are not typically handled by garbage collection, but rather by other methods (e.g. destructors). Some such methods de-allocate memory also.
== Overview ==
Many programming languages require garbage collection, either as part of the language specification (e.g., RPL, Java, C#, D, Go, and most scripting languages) or effectively for practical implementation (e.g., formal languages like lambda calculus). These are said to be garbage-collected languages. Other languages, such as C and C++, were designed for use with manual memory management, but have garbage-collected implementations available. Some languages, like Ada, Modula-3, and C++/CLI, allow both garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects. Still others, like D, are garbage-collected but allow the user to manually delete objects or even disable garbage collection entirely when speed is required.
Although many languages integrate GC into their compiler and runtime system, post-hoc GC systems also exist, such as Automatic Reference Counting (ARC). Some of these post-hoc GC systems do not require recompilation.
=== Advantages ===
GC frees the programmer from manually de-allocating memory. This helps avoid some kinds of errors:
Dangling pointers, which occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is dereferenced. By then the memory may have been reassigned to another use, with unpredictable results.
Double free bugs, which occur when the program tries to free a region of memory that has already been freed, and perhaps already been allocated again.
Certain kinds of memory leaks, in which a program fails to free memory occupied by objects that have become unreachable, which can lead to memory exhaustion.
=== Disadvantages ===
GC uses computing resources to decide which memory to free. Therefore, the penalty for the convenience of not annotating object lifetime manually in the source code is overhead, which can impair program performance. A peer-reviewed paper from 2005 concluded that GC needs five times the memory to compensate for this overhead and to perform as fast as the same program using idealized explicit memory management. The comparison however is made to a program generated by inserting deallocation calls using an oracle, implemented by collecting traces from programs run under a profiler, and the program is only correct for one particular execution of the program. Interaction with memory hierarchy effects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing. The impact on performance was given by Apple as a reason for not adopting garbage collection in iOS, despite it being the most desired feature.
The moment when the garbage is actually collected can be unpredictable, resulting in stalls (pauses to shift/free memory) scattered throughout a session. Unpredictable stalls can be unacceptable in real-time environments, in transaction processing, or in interactive programs. Incremental, concurrent, and real-time garbage collectors address these problems, with varying trade-offs.
== Strategies ==
=== Tracing ===
Tracing garbage collection is the most common type of garbage collection, so much so that "garbage collection" often refers to tracing garbage collection, rather than other methods such as reference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects are reachable by a chain of references from certain root objects, and considering the rest as garbage and collecting them. However, there are a large number of algorithms used in implementation, with widely varying complexity and performance characteristics.
=== Reference counting ===
Reference counting garbage collection is where each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created and decremented when a reference is destroyed. When the count reaches zero, the object's memory is reclaimed.
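The scheme can be sketched with a toy object model; the names are hypothetical, and real collectors perform these steps inside the allocator and language runtime rather than on user-visible objects.

```python
class RefCounted:
    """Toy reference-counted object: 'memory' is reclaimed when the count hits zero."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.freed = False

    def incref(self):          # a reference to the object is created
        self.refcount += 1

    def decref(self):          # a reference is destroyed
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True  # stand-in for returning memory to the allocator

obj = RefCounted("buffer")
obj.incref(); obj.incref()     # two references exist
obj.decref()
assert not obj.freed           # one reference remains
obj.decref()
assert obj.freed               # last reference destroyed -> object reclaimed
```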
As with manual memory management, and unlike tracing garbage collection, reference counting guarantees that objects are destroyed as soon as their last reference is destroyed, and usually only accesses memory which is either in CPU caches, in objects to be freed, or directly pointed to by those, and thus tends to not have significant negative side effects on CPU cache and virtual memory operation.
There are a number of disadvantages to reference counting; these can generally be solved or mitigated by more sophisticated algorithms:
Cycles
If two or more objects refer to each other, they can create a cycle whereby neither will be collected as their mutual references never let their reference counts become zero. Some garbage collection systems using reference counting (like the one in CPython) use specific cycle-detecting algorithms to deal with this issue. Another strategy is to use weak references for the "backpointers" which create cycles. Under reference counting, a weak reference is similar to a weak reference under a tracing garbage collector. It is a special reference object whose existence does not increment the reference count of the referent object. Furthermore, a weak reference is safe in that when the referent object becomes garbage, any weak reference to it lapses, rather than being permitted to remain dangling, meaning that it turns into a predictable value, such as a null reference.
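The weak-backpointer strategy can be illustrated with Python's weakref module. The immediate reclamation shown relies on CPython's reference-counting behavior; other implementations may collect later.

```python
import weakref

class Node:
    pass

parent = Node()
child = Node()
parent.child = child                 # strong reference down the tree
child.parent = weakref.ref(parent)   # weak backpointer: does not raise parent's count

assert child.parent() is parent      # dereferencing the weak reference
del parent                           # last strong reference gone; count drops to zero
assert child.parent() is None        # the weak reference lapses to a predictable value
```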
Space overhead (reference count)
Reference counting requires space to be allocated for each object to store its reference count. The count may be stored adjacent to the object's memory or in a side table somewhere else, but in either case, every single reference-counted object requires additional storage for its reference count. Memory space with the size of an unsigned pointer is commonly used for this task, meaning that 32 or 64 bits of reference count storage must be allocated for each object. On some systems, it may be possible to mitigate this overhead by using a tagged pointer to store the reference count in unused areas of the object's memory. Often, an architecture does not actually allow programs to access the full range of memory addresses that could be stored in its native pointer size; a certain number of high bits in the address is either ignored or required to be zero. If an object reliably has a pointer at a certain location, the reference count can be stored in the unused bits of the pointer. For example, each object in Objective-C has a pointer to its class at the beginning of its memory; on the ARM64 architecture using iOS 7, 19 unused bits of this class pointer are used to store the object's reference count.
Speed overhead (increment/decrement)
In naive implementations, each assignment of a reference and each reference falling out of scope often require modifications of one or more reference counters. However, in a common case when a reference is copied from an outer scope variable into an inner scope variable, such that the lifetime of the inner variable is bounded by the lifetime of the outer one, the reference incrementing can be eliminated. The outer variable "owns" the reference. In the programming language C++, this technique is readily implemented and demonstrated with the use of const references. Reference counting in C++ is usually implemented using "smart pointers" whose constructors, destructors, and assignment operators manage the references. A smart pointer can be passed by reference to a function, which avoids the need to copy-construct a new smart pointer (which would increase the reference count on entry into the function and decrease it on exit). Instead, the function receives a reference to the smart pointer, which is produced inexpensively. The Deutsch-Bobrow method of reference counting capitalizes on the fact that most reference count updates are in fact generated by references stored in local variables. It ignores these references, only counting references in the heap, but before an object with reference count zero can be deleted, the system must verify with a scan of the stack and registers that no other reference to it still exists. A further substantial decrease in the overhead on counter updates can be obtained by update coalescing, introduced by Levanoni and Petrank. Consider a pointer that in a given interval of the execution is updated several times. It first points to an object O1, then to an object O2, and so forth until at the end of the interval it points to some object On. A reference counting algorithm would typically execute rc(O1)--, rc(O2)++, rc(O2)--, rc(O3)++, rc(O3)--, ..., rc(On)++. But most of these updates are redundant. In order to have the reference count properly evaluated at the end of the interval, it is enough to perform rc(O1)-- and rc(On)++. Levanoni and Petrank measured an elimination of more than 99% of the counter updates in typical Java benchmarks.
Requires atomicity
When used in a multithreaded environment, these modifications (increment and decrement) may need to be atomic operations such as compare-and-swap, at least for any objects which are shared, or potentially shared among multiple threads. Atomic operations are expensive on a multiprocessor, and even more expensive if they have to be emulated with software algorithms. It is possible to avoid this issue by adding per-thread or per-CPU reference counts and only accessing the global reference count when the local reference counts become or are no longer zero (or, alternatively, using a binary tree of reference counts, or even giving up deterministic destruction in exchange for not having a global reference count at all), but this adds significant memory overhead and thus tends to be only useful in special cases (it is used, for example, in the reference counting of Linux kernel modules). Update coalescing by Levanoni and Petrank can be used to eliminate all atomic operations from the write-barrier. Counters are never updated by the program threads in the course of program execution. They are only modified by the collector which executes as a single additional thread with no synchronization. This method can be used as a stop-the-world mechanism for parallel programs, and also with a concurrent reference counting collector.
Not real-time
Naive implementations of reference counting do not generally provide real-time behavior, because any pointer assignment can potentially cause a number of objects bounded only by total allocated memory size to be recursively freed while the thread is unable to perform other work. It is possible to avoid this issue by delegating the freeing of unreferenced objects to other threads, at the cost of extra overhead.
=== Escape analysis ===
Escape analysis is a compile-time technique that can convert heap allocations to stack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to "escape" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and associated memory management costs.
== Availability ==
Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In some languages lacking built-in garbage collection, it can be added through a library, as with the Boehm garbage collector for C and C++.
Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp is especially notable as both the first functional programming language and the first language to introduce garbage collection.
Other dynamic languages, such as Ruby, Julia, JavaScript and ECMAScript, also tend to use GC (but not Perl 5 or PHP before version 5.3, which both use reference counting). Object-oriented programming languages such as Smalltalk, ooRexx, RPL and Java usually provide integrated garbage collection. Notable exceptions are C++ and Delphi, which have destructors.
=== BASIC ===
BASIC and Logo have often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details. On the Altair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection. Similarly the Applesoft BASIC interpreter's garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting in
O
(
n
2
)
{\displaystyle O(n^{2})}
performance and pauses anywhere from a few seconds to a few minutes. A replacement garbage collector for Applesoft BASIC by Randy Wigginton identifies a group of strings in every pass over the heap, reducing collection time dramatically. BASIC.SYSTEM, released with ProDOS in 1983, provides a windowing garbage collector for BASIC that is many times faster.
=== Objective-C ===
While Objective-C traditionally had no garbage collection, with the release of OS X 10.5 in 2007 Apple introduced garbage collection for Objective-C 2.0, using an in-house developed runtime collector.
However, with the 2012 release of OS X 10.8, garbage collection was deprecated in favor of LLVM's automatic reference counter (ARC) that was introduced with OS X 10.7. Furthermore, since May 2015 Apple even forbade the usage of garbage collection for new OS X applications in the App Store. For iOS, garbage collection has never been introduced due to problems in application responsivity and performance; instead, iOS uses ARC.
=== Limited environments ===
Garbage collection is rarely used on embedded or real-time systems because of the usual need for very tight control over the use of limited resources. However, garbage collectors compatible with many limited environments have been developed. The Microsoft .NET Micro Framework, .NET nanoFramework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.
=== Java ===
Garbage collectors available in OpenJDK's Java virtual machine (JVM) include:
Serial
Parallel
CMS (Concurrent Mark Sweep)
G1 (Garbage-First)
ZGC (Z Garbage Collector)
Epsilon
Shenandoah
GenZGC (Generational ZGC)
GenShen (Generational Shenandoah)
IBM Metronome (only in IBM OpenJDK)
SAP (only in SAP OpenJDK)
Azul C4 (Continuously Concurrent Compacting Collector) (only in Azul Systems OpenJDK)
=== Compile-time use ===
Compile-time garbage collection is a form of static analysis allowing memory to be reused and reclaimed based on invariants known during compilation.
This form of garbage collection has been studied in the Mercury programming language, and it saw greater usage with the introduction of LLVM's automatic reference counter (ARC) into Apple's ecosystem (iOS and OS X) in 2011.
=== Real-time systems ===
Incremental, concurrent, and real-time garbage collectors have been developed, for example by Henry Baker and by Henry Lieberman.
In Baker's algorithm, the allocation is done in either half of a single region of memory. When it becomes half full, a garbage collection is performed which moves the live objects into the other half and the remaining objects are implicitly deallocated. The running program (the 'mutator') has to check that any object it references is in the correct half, and if not move it across, while a background task is finding all of the objects.
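A much-simplified, non-incremental stop-and-copy sketch conveys the core idea of copying live objects into the other half; Baker's actual algorithm interleaves this copying with the running mutator, which this toy version omits.

```python
class Obj:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def collect(from_space, roots):
    """Copy every object reachable from `roots` into a fresh to-space;
    whatever remains in from-space is implicitly deallocated."""
    to_space, seen = [], set()
    work = list(roots)
    while work:
        obj = work.pop()
        if id(obj) in seen:
            continue
        seen.add(id(obj))
        to_space.append(obj)          # "move" the live object across
        work.extend(obj.children)     # trace its references
    return to_space                   # from-space can now be reused wholesale

a = Obj("a"); b = Obj("b", [a]); garbage = Obj("dead")
live = collect([a, b, garbage], roots=[b])
assert {o.name for o in live} == {"a", "b"}   # "dead" was never copied
```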
Generational garbage collection schemes are based on the empirical observation that most objects die young. In generational garbage collection, two or more allocation regions (generations) are kept, which are kept separate based on the object's age. New objects are created in the "young" generation that is regularly collected, and when a generation is full, the objects that are still referenced from older regions are copied into the next oldest generation. Occasionally a full scan is performed.
Some high-level language computer architectures include hardware support for real-time garbage collection.
Most implementations of real-time garbage collectors use tracing. Such real-time garbage collectors meet hard real-time constraints when used with a real-time operating system.
== See also ==
Destructor (computer programming)
Dynamic dead-code elimination
Smart pointer
Virtual memory compression
== References ==
== Further reading ==
Jones, Richard; Hosking, Antony; Moss, J. Eliot B. (2011-08-16). The Garbage Collection Handbook: The Art of Automatic Memory Management. CRC Applied Algorithms and Data Structures Series. Chapman and Hall / CRC Press / Taylor & Francis Ltd. ISBN 978-1-4200-8279-1. (511 pages)
Jones, Richard; Lins, Rafael (1996-07-12). Garbage Collection: Algorithms for Automatic Dynamic Memory Management (1 ed.). Wiley. ISBN 978-0-47194148-4. (404 pages)
Schorr, Herbert; Waite, William M. (August 1967). "An Efficient Machine-Independent Procedure for Garbage Collection in Various List Structures" (PDF). Communications of the ACM. 10 (8): 501–506. doi:10.1145/363534.363554. S2CID 5684388. Archived (PDF) from the original on 2021-01-22.
Wilson, Paul R. (1992). "Uniprocessor Garbage Collection Techniques". Memory Management. Lecture Notes in Computer Science. Vol. 637. Springer-Verlag. pp. 1–42. CiteSeerX 10.1.1.47.2438. doi:10.1007/bfb0017182. ISBN 3-540-55940-X. {{cite book}}: |journal= ignored (help)
Wilson, Paul R.; Johnstone, Mark S.; Neely, Michael; Boles, David (1995). "Dynamic Storage Allocation: A Survey and Critical Review". Memory Management. Lecture Notes in Computer Science. Vol. 986 (1 ed.). pp. 1–116. CiteSeerX 10.1.1.47.275. doi:10.1007/3-540-60368-9_19. ISBN 978-3-540-60368-9. {{cite book}}: |journal= ignored (help)
== External links ==
The Memory Management Reference
The Very Basics of Garbage Collection
Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
TinyGC - an independent implementation of the BoehmGC API
Conservative Garbage Collection Implementation for C Language
MeixnerGC - an incremental mark and sweep garbage collector for C++ using smart pointers
In systems engineering and requirements engineering, a non-functional requirement (NFR) is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviours. They are contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture, because they are usually architecturally significant requirements.
In software architecture, non-functional requirements are known as "architectural characteristics". Note that synchronous communication between software architectural components entangles them, and they must share the same architectural characteristics.
== Definition ==
Broadly, functional requirements define what a system is supposed to do and non-functional requirements define how a system is supposed to be. Functional requirements are usually in the form of "system shall do <requirement>", an individual action or part of the system, perhaps explicitly in the sense of a mathematical function, a black box description input, output, process and control functional model or IPO model. In contrast, non-functional requirements are in the form of "system shall be <requirement>", an overall property of the system as a whole or of a particular aspect and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed.
Non-functional requirements are often called the "quality attributes" of a system. The emergent properties of a system are classified as non-functional requirements. Other terms for non-functional requirements are "qualities", "quality goals", "quality of service requirements", "constraints", "non-behavioral requirements", or "technical requirements". Informally these are sometimes called the "ilities", from attributes like stability and portability. Qualities—that is non-functional requirements—can be divided into two main categories:
Execution qualities, such as safety, security and usability, which are observable during operation (at run time).
Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the static structure of the system.
It is important to specify non-functional requirements in a specific and measurable way.
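For instance, "the system shall be fast" is not testable, whereas a latency budget is. A minimal sketch, with a hypothetical budget and operation, shows how a measurable non-functional requirement can be checked mechanically:

```python
import time

# Hypothetical measurable form of a non-functional requirement:
# "a lookup shall complete within 50 ms", rather than "the system shall be fast".
LATENCY_BUDGET_S = 0.050

def lookup(records, key):
    return records.get(key)

records = {i: str(i) for i in range(100_000)}
start = time.perf_counter()
lookup(records, 42_000)
elapsed = time.perf_counter() - start
assert elapsed < LATENCY_BUDGET_S, "non-functional performance requirement violated"
```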
== Classification of non-functional requirements ==
Common non-functional classifications, relevant for all types of systems include
Performance
Reliability, availability, maintainability and safety
Scalability
Testability
Specific types of systems explicitly enumerate categories of non-functional requirements in their standards:
Hardware systems
Embedded systems
Safety-critical systems
Software systems
== Examples ==
A system may be required to present the user with a display of the number of records in a database. This is a functional requirement. How current this number needs to be, is a non-functional requirement. If the number needs to be updated in real time, the system architects must ensure that the system is capable of displaying the record count within an acceptably short interval of the number of records changing.
Sufficient network bandwidth may be another example of a non-functional requirement of a system.
== See also ==
ISO/IEC 25010:2011
Consortium for IT Software Quality
ISO/IEC 9126
FURPS
Requirements analysis
Usability requirements
Non-Functional Requirements framework
Architecturally Significant Requirements
SNAP Points
== References ==
== Notes ==
== External links ==
Petter L. H. Eide (2005). "Quantification and Traceability of Requirements". CiteSeerX 10.1.1.95.6464.
Dalbey, John. "Nonfunctional Requirements". Csc.calpoly.edu. Retrieved 3 October 2017.
"Modeling Non-Functional Aspects in Service Oriented Architecture" (PDF). Cs.umb.edu. Archived from the original (PDF) on 24 July 2011. Retrieved 3 October 2017.
"Non-Functional Requirements: Do User Stories Really Help?". Methodsandtools.com. Retrieved 3 October 2017.
"Non-Functional Requirements Be Here - CISQ - Consortium for IT Software Quality". it-cisq.org. Retrieved 3 October 2017.
"Do Software Architectures Meet Extra-Functional or Non-Functional Requirements?". 19 November 2020.
In computer programming, a function (also procedure, method, subroutine, routine, or subprogram) is a callable unit of software logic that has a well-defined interface and behavior and can be invoked multiple times.
Callable units provide a powerful programming tool. The primary purpose is to allow for the decomposition of a large and/or complicated problem into chunks that have relatively low cognitive load and to assign the chunks meaningful names (unless they are anonymous). Judicious application can reduce the cost of developing and maintaining software, while increasing its quality and reliability.
Callable units are present at multiple levels of abstraction in the programming environment. For example, a programmer may write a function in source code that is compiled to machine code that implements similar semantics. There is a callable unit in the source code and an associated one in the machine code, but they are different kinds of callable units – with different implications and features.
== Terminology ==
Some programming languages, such as COBOL and BASIC, make a distinction between functions that return a value (typically called "functions") and those that do not (typically called "subprogram", "subroutine", or "procedure"). Other programming languages, such as C, C++, and Rust, only use the term "function" irrespective of whether they return a value or not. Some object-oriented languages, such as Java and C#, refer to functions inside classes as "methods".
== History ==
The idea of a callable unit was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines." Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack.
The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use the call sequence—a series of instructions—at each call site.
Subroutines were implemented in Konrad Zuse's Z4 in 1945.
In 1945, Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines.
In January 1947, John Mauchly presented general notes at 'A Symposium of Large Scale Digital Calculating Machinery' under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy, in which he discusses serial and parallel operation, suggesting:
...the structure of the machine need not be complicated one bit. It is possible, since all the logical characteristics essential to this procedure are available, to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use. In other words, one can designate subroutine A as division and subroutine B as complex multiplication and subroutine C as the evaluation of a standard error of a sequence of numbers, and so on through the list of subroutines needed for a particular problem. ... All these subroutines will then be stored in the machine, and all one needs to do is make a brief reference to them by number, as they are indicated in the coding.
Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II. She and the other ENIAC programmers used the subroutines to help calculate missile trajectories.
Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines.
Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses—such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s—such as the UNIVAC I, the PDP-1, and the IBM 1130—typically use a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allows arbitrarily deep levels of subroutine nesting but does not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines. The Burroughs B5000 (1961) is one of the first computers to store subroutine return data on a stack.
The DEC PDP-6 (1964) is one of the first accumulator-based machines to have a subroutine call instruction that saved the return address in a stack addressed by an accumulator or index register. The later PDP-10 (1966), PDP-11 (1970) and VAX-11 (1976) lines followed suit; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines.
=== Language support ===
In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together.
One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming.
=== Libraries ===
Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs.
Many early computers loaded the program instructions into memory from a punched paper tape. Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program (or "mainline"); and the same subroutine tape could then be used by many different programs. A similar approach was used in computers that loaded program instructions from punched cards. The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or decks of cards for collective use.
=== Return by indirect jump ===
To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address.
On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable.
=== Jump to subroutine ===
Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly.
In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack.
In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch; execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, for example, a main program would call a subroutine named MYSUB with the instruction JSB MYSUB.
The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB, I which branched to the location stored at location MYSUB.
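The calling pattern discussed above can be sketched as follows (labels AA, BB, and MYSUB as in the text; comments and exact syntax are illustrative, not verbatim HP 2100 source):

```
       ...
       JSB MYSUB     call the subroutine; return address stored at MYSUB
 BB    ...           execution resumes here after the subroutine returns

 MYSUB NOP           placeholder word that receives the return address (BB)
 AA    ...           first instruction of the subroutine body (MYSUB + 1)
       ...
       JMP MYSUB,I   indirect jump through MYSUB branches back to BB
```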
Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls.
Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC.
=== Call stack ===
Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address.
The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose.
The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter.
Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible.
When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, which have been called but haven't returned yet). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management.
However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data.
In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records.
==== Delayed stacking ====
One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both.
This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns.
== Features ==
In general, a callable unit is a list of instructions that, starting at the first instruction, executes sequentially except as directed via its internal logic. It can be invoked (called) many times during the execution of a program. Execution continues at the next instruction after the call instruction when it returns control.
== Implementations ==
The features of implementations of callable units have evolved over time and vary by context.
This section describes features of the various common implementations.
=== General characteristics ===
Most modern programming languages provide features to define and call functions, including syntax to:
Delimit the implementation of a function from the rest of the program
Assign an identifier, name, to a function
Define formal parameters with a name and data type for each
Assign a data type to the return value, if any
Specify a return value in the function body
Call a function
Provide actual parameters that correspond to a called function's formal parameters
Return control to the caller at the point of call
Consume the return value in the caller
Dispose of the values returned by a call
Provide a private naming scope for variables
Identify variables outside the function that are accessible within it
Propagate an exceptional condition out of a function and handle it in the calling context
Package functions into a container such as module, library, object, or class
=== Naming ===
Some languages, such as Pascal, Fortran, Ada and many dialects of BASIC, use a different name for a callable unit that returns a value (function or subprogram) vs. one that does not (subroutine or procedure).
Other languages, such as C, C++, C# and Lisp, use only one name for a callable unit, function. The C-family languages use the keyword void to indicate no return value.
=== Call syntax ===
If declared to return a value, a call can be embedded in an expression in order to consume the return value. For example, a square root callable unit might be called like y = sqrt(x).
A callable unit that does not return a value is called as a stand-alone statement like print("hello"). This syntax can also be used for a callable unit that returns a value, but the return value will be ignored.
Some older languages require a keyword for calls that do not consume a return value, like CALL print("hello").
=== Parameters ===
Most implementations, especially in modern languages, support parameters which the callable declares as formal parameters. A caller passes actual parameters, a.k.a. arguments, to match. Different programming languages provide different conventions for passing arguments.
=== Return value ===
In some languages, such as BASIC, a callable has different syntax (i.e. keyword) for a callable that returns a value vs. one that does not.
In other languages, the syntax is the same regardless.
In some of these languages an extra keyword is used to declare no return value; for example void in C, C++ and C#.
In some languages, such as Python, the difference is whether the body contains a return statement with a value, and a particular callable may return with or without a value based on control flow.
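A short sketch in Python (the function name first_positive is illustrative): the same callable returns with a value on one path and without one on another:

```python
def first_positive(values):
    for v in values:
        if v > 0:
            return v  # returns a value on this path
    # no explicit return here: falling off the end returns None
```

A call such as first_positive([-3, 4, 9]) evaluates to 4, while first_positive([-3, -1]) evaluates to None.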
=== Side effects ===
In many contexts, a callable may have side effect behavior such as modifying passed or global data, reading from or writing to a peripheral device, accessing a file, halting the program or the machine, or temporarily pausing program execution.
Side effects are considered undesirable by Robert C. Martin, who is known for promoting design principles. Martin argues that side effects can result in temporal coupling or order dependencies.
In strictly functional programming languages such as Haskell, a function can have no side effects, which means it cannot change the state of the program. Functions always return the same result for the same input. Such languages typically only support functions that return a value, since there is no value in a function that has neither return value nor side effect.
=== Local variables ===
Most contexts support local variables – memory owned by a callable to hold intermediate values. These variables are typically stored in the call's activation record on the call stack along with other information such as the return address.
=== Nested call – recursion ===
If supported by the language, a callable may call itself, causing its execution to suspend while another nested execution of the same callable executes. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages provide a new copy of local variables on each call. If the programmer desires the recursive callable to use the same variables instead of using locals, they typically declare them in a shared context such as static or global.
Languages going back to ALGOL, PL/I and C, and modern languages, almost invariably use a call stack, usually supported by the instruction set, to provide an activation record for each call. That way, a nested call can modify its local variables without affecting any of the suspended calls' variables.
Recursion allows direct implementation of functionality defined by mathematical induction and recursive divide and conquer algorithms. Here is an example of a recursive function in C/C++ to find Fibonacci numbers:
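A minimal version of such a function (a sketch; the article's original listing is not reproduced verbatim):

```cpp
unsigned int fib(unsigned int n) {
    // Base cases: fib(0) = 0 and fib(1) = 1
    if (n <= 1)
        return n;
    // Each call suspends while two nested calls of the same function run
    return fib(n - 1) + fib(n - 2);
}
```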
Early languages like Fortran did not initially support recursion because only one set of variables and return address were allocated for each callable. Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., CDC 6000 series, PDP-6, GE 635, System/360, UNIVAC 1100 series, could use one of those registers as a stack pointer.
=== Nested scope ===
Some languages, e.g., Ada, Pascal, PL/I, Python, support declaring and defining a function inside, e.g., a function body, such that the name of the inner is only visible within the body of the outer.
=== Reentrancy ===
If a callable can be executed properly even when another execution of the same callable is already in progress, that callable is said to be reentrant. A reentrant callable is also useful in multi-threaded situations since multiple threads can call the same callable without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads.
=== Overloading ===
Some languages support overloading – allowing multiple callables with the same name in the same scope, but operating on different types of input. Consider the square root function applied to real number, complex number and matrix input. The algorithm for each type of input is different, and the return value may have a different type. By writing three separate callables with the same name, i.e. sqrt, the resulting code may be easier to write and to maintain, since each one has a name that is relatively easy to understand and to remember, instead of longer and more complicated names like sqrt_real, sqrt_complex, sqrt_matrix.
Overloading is supported in many languages that support strong typing. Often the compiler selects the overload to call based on the type of the input arguments or it fails if the input arguments do not select an overload. Older and weakly-typed languages generally do not support overloading.
Here is an example of overloading in C++, two functions Area that accept different types:
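A sketch of two such overloads (the parameter lists, not the names, distinguish them; the hard-coded constant for pi is a simplification):

```cpp
// Rectangle: selected by overload resolution when two arguments are passed
double Area(double width, double height) {
    return width * height;
}

// Circle: selected by overload resolution when one argument is passed
double Area(double radius) {
    return 3.14159265358979 * radius * radius;
}
```

A call Area(2.0, 3.0) resolves to the first function, Area(1.0) to the second.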
PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example:
DECLARE gen_name GENERIC(
name WHEN(FIXED BINARY),
flame WHEN(FLOAT),
pathname OTHERWISE);
Multiple argument definitions may be specified for each entry. A call to "gen_name" will result in a call to "name" when the argument is FIXED BINARY, "flame" when FLOAT, etc. If the argument matches none of the choices, "pathname" will be called.
=== Closure ===
A closure is a callable plus values of some of its variables captured from the environment in which it was created. Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side-effects.
=== Exception reporting ===
Besides its happy path behavior, a callable may need to inform the caller about an exceptional condition that occurred during its execution.
Most modern languages support exceptions which allows for exceptional control flow that pops the call stack until an exception handler is found to handle the condition.
Languages that do not support exceptions can use the return value to indicate success or failure of a call. Another approach is to use a well-known location like a global variable for success indication. A callable writes the value and the caller reads it after a call.
In the IBM System/360, where return code was expected from a subroutine, the return value was often designed to be a multiple of 4—so that it could be used as a direct branch table index into a branch table often located immediately after the call instruction to avoid extra conditional tests, further improving efficiency. In the System/360 assembly language, one would write, for example:
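A sketch of the pattern (labels and comments are illustrative):

```
         BAL   14,SUBRTN       call subroutine; return address in register 14
         B     TABLE(15)       branch into the table, indexed by the return code in register 15
TABLE    B     OK              return code = 00: success
         B     BAD             return code = 04: invalid input
         B     ERROR           return code = 08: unexpected condition
```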
=== Call overhead ===
A call has runtime overhead, which may include but is not limited to:
Allocating and reclaiming call stack storage
Saving and restoring processor registers
Copying input variables
Copying values after the call into the caller's context
Automatic testing of the return code
Handling of exceptions
Dispatching such as for a virtual method in an object-oriented language
Various techniques are employed to minimize the runtime cost of calls.
==== Compiler optimization ====
Some optimizations for minimizing call overhead may seem straightforward, but cannot be used if the callable has side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f cannot be called only once with its value used two times, since the two calls may return different results. Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a callable has a side effect is difficult – indeed, undecidable by virtue of Rice's theorem. So, while this optimization is safe in a purely functional programming language, a compiler for a language not limited to functional programming typically assumes the worst case: that every callable may have side effects.
==== Inlining ====
Inlining eliminates calls for particular callables. The compiler replaces each call with the compiled code of the callable. Not only does this avoid the call overhead, but it also allows the compiler to optimize code of the caller more effectively by taking into account the context and arguments at that call. Inlining, however, usually increases the compiled code size, except when only called once or the body is very short, like one line.
=== Sharing ===
Callables can be defined within a program, or separately in a library that can be used by multiple programs.
=== Inter-operability ===
A compiler translates call and return statements into machine instructions according to a well-defined calling convention. For code compiled by the same or a compatible compiler, functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue.
=== Built-in functions ===
A built-in function, or builtin function, or intrinsic function, is a function for which the compiler generates code at compile time or provides in a way other than for other functions. A built-in function does not need to be defined like other functions since it is built in to the programming language.
== Programming ==
=== Trade-offs ===
==== Advantages ====
Advantages of breaking a program into functions include:
Decomposing a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
Reducing duplicate code within a program
Enabling reuse of code across multiple programs
Dividing a large programming task among various programmers or various stages of a project
Hiding implementation details from users of the function
Improving readability of code by replacing a block of code with a function call where a descriptive function name serves to describe the block of code. This makes the calling code concise and readable even if the function is not meant to be reused.
Improving traceability (i.e. most languages offer ways to obtain the call trace which includes the names of the involved functions and perhaps even more information such as file names and line numbers); by not decomposing the code into functions, debugging would be severely impaired
==== Disadvantages ====
Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism.
A function typically requires standard housekeeping code – both at the entry to, and exit from, the function (function prologue and epilogue – usually saving general purpose registers and return address as a minimum).
=== Conventions ===
Many programming conventions have been developed regarding callables.
With respect to naming, many developers name a callable with a phrase starting with a verb when it does a certain task, with an adjective when it makes an inquiry, and with a noun when it is used to substitute variables.
Some programmers suggest that a callable should perform exactly one task, and if it performs more than one task, it should be split up into multiple callables. They argue that callables are key components in software maintenance, and their roles in the program must remain distinct.
Proponents of modular programming advocate that each callable should have minimal dependency on the rest of the codebase. For example, the use of global variables is generally deemed unwise, because it adds coupling between all callables that use the global variables. If such coupling is not necessary, they advise to refactor callables to accept passed parameters instead.
== Examples ==
=== Early BASIC ===
Early BASIC variants require each line to have a unique number (a line number) that orders the lines for execution. They provide no separation of the callable code from the rest of the program, no mechanism for passing arguments or returning a value, and all variables are global. They provide the command GOSUB, where sub is short for sub procedure, subprocedure or subroutine. Control jumps to the specified line number and, on RETURN, continues at the line after the GOSUB.
This code repeatedly asks the user to enter a number and reports the square root of the value. Lines 100-130 are the callable.
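A sketch of such a program (the concrete listing is an assumption consistent with that description):

```
10 INPUT "ENTER A NUMBER"; N
20 GOSUB 100
30 GOTO 10
100 REM PRINT THE SQUARE ROOT OF N
110 LET R = SQR(N)
120 PRINT "SQUARE ROOT:"; R
130 RETURN
```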
=== Small Basic ===
In Microsoft Small Basic, targeted to the student first learning how to program in a text-based language, a callable unit is called a subroutine.
The Sub keyword denotes the start of a subroutine and is followed by a name identifier. Subsequent lines are the body which ends with the EndSub keyword.
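For example (a minimal sketch; the message text is illustrative):

```
Sub SayHello
  TextWindow.WriteLine("Hello!")
EndSub
```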
This can be called as SayHello().
=== Visual Basic ===
In later versions of Visual Basic (VB), including the latest product line and VB6, the term procedure is used for the callable unit concept. The keyword Sub is used to return no value and Function to return a value. When used in the context of a class, a procedure is a method.
Each parameter has a data type that can be specified, but if not, defaults to Object for later versions based on .NET and variant for VB6.
VB supports parameter passing conventions by value and by reference via the keywords ByVal and ByRef, respectively.
Unless ByRef is specified, an argument is passed ByVal. Therefore, ByVal is rarely explicitly specified.
For a simple type like a number these conventions are relatively clear. Passing ByRef allows the procedure to modify the passed variable whereas passing ByVal does not. For an object, semantics can confuse programmers since an object is always treated as a reference. Passing an object ByVal copies the reference; not the state of the object. The called procedure can modify the state of the object via its methods yet cannot modify the object reference of the actual parameter.
This does not return a value and has to be called stand-alone, like DoSomething
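For example (a sketch; the procedure body is illustrative):

```
Sub DoSomething()
    ' statements executed for their effect; no value is returned
End Sub
```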
This returns the value 5, and a call can be part of an expression like y = x + GiveMeFive()
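For example (a sketch):

```
Function GiveMeFive() As Integer
    ' assigning to the function name sets the return value
    GiveMeFive = 5
End Function
```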
This has a side-effect – it modifies the variable passed by reference and could be called for variable v like AddTwo(v). Given v is 5 before the call, it will be 7 after.
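For example (a sketch):

```
Sub AddTwo(ByRef intValue As Integer)
    ' ByRef allows the procedure to modify the caller's variable
    intValue = intValue + 2
End Sub
```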
=== C and C++ ===
In C and C++, a callable unit is called a function.
A function definition starts with the name of the type of value that it returns or void to indicate that it does not return a value. This is followed by the function name, formal arguments in parentheses, and body lines in braces.
In C++, a function declared in a class (as non-static) is called a member function or method. A function outside of a class can be called a free function to distinguish it from a member function.
This function does not return a value and is always called stand-alone, like doSomething()
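For example (a sketch; the body is illustrative):

```c
void doSomething(void) {
    /* statements executed for their effect; no value is returned */
}
```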
This function returns the integer value 5. The call can be stand-alone or in an expression like y = x + giveMeFive()
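For example (a sketch):

```c
int giveMeFive(void) {
    return 5;  /* the value returned to the caller */
}
```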
This function has a side-effect – it adds 2 to the variable whose address is passed. It could be called for variable v as addTwo(&v), where the ampersand (&) tells the compiler to pass the address of the variable. Given v is 5 before the call, it will be 7 after.
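For example (a sketch):

```c
void addTwo(int *pi) {
    /* dereference the pointer to update the caller's variable */
    *pi = *pi + 2;
}
```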
This function requires C++ – it would not compile as C. It has the same behavior as the preceding example but passes the actual parameter by reference rather than passing its address. A call such as addTwo(v) does not include an ampersand since the compiler handles passing by reference without extra syntax in the call.
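For example (a sketch):

```cpp
void addTwo(int &i) {
    // i is an alias for the caller's variable; no dereference syntax is needed
    i = i + 2;
}
```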
=== PL/I ===
In PL/I a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A (trivial) function to change the sign of each element of a two-dimensional array might look like:
change_sign: procedure(array);
declare array(*,*) float;
array = -array;
end change_sign;
This could be called with various arrays as follows:
/* first array bounds from -5 to +10 and 3 to 9 */
declare array1 (-5:10, 3:9) float;
/* second array bounds from 1 to 16 and 1 to 16 */
declare array2 (16,16) float;
call change_sign(array1);
call change_sign(array2);
=== Python ===
In Python, the keyword def denotes the start of a function definition. The statements of the function body follow as indented on subsequent lines and end at the line that is indented the same as the first line or end of file.
The first function returns greeting text that includes the name passed by the caller. The second function calls the first and is called like greet_martin() to write "Welcome Martin" to the console.
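A minimal sketch of the two functions described; the original listing is not shown here, so the names greet and greet_martin are assumptions:

```python
def greet(name):
    # Return greeting text that includes the name passed by the caller
    return "Welcome " + name

def greet_martin():
    # Call the first function and write its result to the console
    print(greet("Martin"))

greet_martin()  # prints "Welcome Martin"
```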
=== Prolog ===
In the procedural interpretation of logic programs, logical implications behave as goal-reduction procedures. A rule (or clause) of the form:
A :- B
which has the logical reading:
A if B
behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B.
Consider, for example, the Prolog program:
Notice that the motherhood function, X = mother(Y), is represented by a relation, as in a relational database. However, relations in Prolog function as callable units.
For example, the procedure call ?- parent_child(X, charles) produces the output X = elizabeth. But the same procedure can be called with other input-output patterns. For example:
== See also ==
Asynchronous procedure call, a subprogram that is called after its parameters are set by other activities
Command–query separation (CQS)
Compound operation
Coroutines, subprograms that call each other as if both were the main programs
Evaluation strategy
Event handler, a subprogram that is called in response to an input event or interrupt
Function (mathematics)
Functional programming
Fused operation
Intrinsic function
Lambda function (computer programming), a function that is not bound to an identifier
Logic programming
Modular programming
Operator overloading
Protected procedure
Transclusion
== References == | Wikipedia/Function_call |
In computer science, the general meaning of input is to provide or give something to the computer; in other words, when a computer or device receives a command or signal from an outside source, the event is referred to as input to the device.
Some computer devices can also be categorized as input devices, because they are used to send instructions to the computer. Some common examples of computer input devices are:
Mouse
Keyboard
Touchscreen
Microphone
Webcam
Softcam
Touchpad
Trackpad
Image scanner
Trackball
Many internal components of a computer are input components for other components. For example, the power-on button of a computer is an input component for the processor or the power supply, because it takes user input and sends it to other components for further processing.
In many computer languages, such as Visual Basic or Python, "input" is a special keyword or built-in function used to give the machine the data it has to process.
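In Python, for example, the built-in input function reads a line of text typed by the user. The helper below is a sketch (the name process_input is illustrative) showing that the returned string must be converted before numeric processing:

```python
def process_input(line):
    # input() always returns a string; numeric data must be
    # converted explicitly before arithmetic
    return int(line) * 2

# In an interactive session this would read from the keyboard:
#     value = process_input(input("Enter a number: "))
# Here the call is simulated with a fixed string:
value = process_input("21")
print(value)  # prints 42
```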
== See also ==
Input method
Input device
Input/output
== References == | Wikipedia/Input_(computer_science) |
In computer science, program optimization, code optimization, or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, or to make it capable of operating with less memory storage or other resources, or draw less power.
== Overview ==
Although the term "optimization" is derived from "optimum", achieving a truly optimal system is rare in practice; the process of finding a provably optimal program is referred to as superoptimization. Optimization typically focuses on improving a system with respect to a specific quality metric rather than making it universally optimal. This often leads to trade-offs, where enhancing one metric may come at the expense of another. One popular example is the space–time tradeoff: reducing a program’s execution time by increasing its memory consumption. Conversely, in scenarios where memory is limited, engineers might prioritize a slower algorithm to conserve space. There is rarely a single design that can excel in all situations, requiring engineers to prioritize attributes most relevant to the application at hand.
Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained. Consequently, optimization processes usually stop once sufficient improvements are achieved, without striving for perfection. Fortunately, significant gains often occur early in the optimization process, making it practical to stop before reaching diminishing returns.
== Levels of optimization ==
Optimization can occur at a number of levels. Typically the higher levels have greater impact, and are harder to change later on in a project, requiring significant changes or a complete rewrite if they need to be changed. Thus optimization can typically proceed via refinement from higher to lower, with initial gains being larger and achieved with less work, and later gains being smaller and requiring more work. However, in some cases overall performance depends on performance of very low-level portions of a program, and small changes at a late stage or early consideration of low-level details can have outsized impact. Typically some consideration is given to efficiency throughout a project – though this varies significantly – but major optimization is often considered a refinement to be done late, if ever. On longer-running projects there are typically cycles of optimization, where improving one area reveals limitations in another, and these are typically curtailed when performance is acceptable or gains become too small or costly.
As performance is part of the specification of a program – a program that is unusably slow is not fit for purpose: a video game with 60 Hz (frames-per-second) is acceptable, but 6 frames-per-second is unacceptably choppy – performance is a consideration from the start, to ensure that the system is able to deliver sufficient performance, and early prototypes need to have roughly acceptable performance for there to be confidence that the final system will (with optimization) achieve acceptable performance. This is sometimes omitted in the belief that optimization can always be done later, resulting in prototype systems that are far too slow – often by an order of magnitude or more – and systems that ultimately are failures because they architecturally cannot achieve their performance goals, such as the Intel 432 (1981); or ones that take years of work to achieve acceptable performance, such as Java (1995), which only achieved acceptable performance with HotSpot (1999). The degree to which performance changes between prototype and production system, and how amenable it is to optimization, can be a significant source of uncertainty and risk.
=== Design level ===
At the highest level, the design may be optimized to make best use of the available resources, given goals, constraints, and expected use/load. The architectural design of a system overwhelmingly affects its performance. For example, a system that is network latency-bound (where network latency is the main constraint on overall performance) would be optimized to minimize network trips, ideally making a single request (or no requests, as in a push protocol) rather than multiple roundtrips. Choice of design depends on the goals: when designing a compiler, if fast compilation is the key priority, a one-pass compiler is faster than a multi-pass compiler (assuming same work), but if speed of output code is the goal, a slower multi-pass compiler fulfills the goal better, even though it takes longer itself. Choice of platform and programming language occur at this level, and changing them frequently requires a complete rewrite, though a modular system may allow rewrite of only some component – for example, for a Python program one may rewrite performance-critical sections in C. In a distributed system, choice of architecture (client-server, peer-to-peer, etc.) occurs at the design level, and may be difficult to change, particularly if all components cannot be replaced in sync (e.g., old clients).
=== Algorithms and data structures ===
Given an overall design, a good choice of efficient algorithms and data structures, and efficient implementation of these algorithms and data structures comes next. After design, the choice of algorithms and data structures affects efficiency more than any other aspect of the program. Generally data structures are more difficult to change than algorithms, as a data structure assumption and its performance assumptions are used throughout the program, though this can be minimized by the use of abstract data types in function definitions, and keeping the concrete data structure definitions restricted to a few places.
For algorithms, this primarily consists of ensuring that algorithms are constant O(1), logarithmic O(log n), linear O(n), or in some cases log-linear O(n log n) in the input (both in space and time). Algorithms with quadratic complexity O(n²) fail to scale, and even linear algorithms cause problems if repeatedly called, and are typically replaced with constant or logarithmic if possible.
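A sketch of such a replacement in Python: both functions below detect duplicates, but the pairwise version does O(n²) comparisons while the set-based version does O(n) expected work (the function names are illustrative):

```python
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n) expected: each membership test against the hash set
    # is O(1) on average
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```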
Beyond asymptotic order of growth, the constant factors matter: an asymptotically slower algorithm may be faster or smaller (because simpler) than an asymptotically faster algorithm when they are both faced with small input, which may be the case that occurs in reality. Often a hybrid algorithm will provide the best performance, due to this tradeoff changing with size.
A general technique to improve performance is to avoid work. A good example is the use of a fast path for common cases, improving performance by avoiding unnecessary work. For example, using a simple text layout algorithm for Latin text, only switching to a complex layout algorithm for complex scripts, such as Devanagari. Another important technique is caching, particularly memoization, which avoids redundant computations. Because of the importance of caching, there are often many levels of caching in a system, which can cause problems from memory use, and correctness issues from stale caches.
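Memoization can be sketched in Python with the standard library's functools.lru_cache; the Fibonacci function here is only an illustrative stand-in for any expensive pure computation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every distinct argument seen
def fib(n):
    # Without memoization this recursion takes exponential time;
    # with the cache each value is computed once, making it linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # prints 832040, computed with only 31 distinct calls
```

The stale-cache correctness issue mentioned above does not arise here only because fib is pure; caching a function whose result depends on mutable state requires explicit invalidation.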
=== Source code level ===
Beyond general algorithms and their implementation on an abstract machine, concrete source code level choices can make a significant difference. For example, on early C compilers, while(1) was slower than for(;;) for an unconditional loop, because while(1) evaluated 1 and then had a conditional jump which tested if it was true, while for (;;) had an unconditional jump. Some optimizations (such as this one) can nowadays be performed by optimizing compilers. This depends on the source language, the target machine language, and the compiler, can be difficult to understand or predict, and changes over time; this is a key place where understanding of compilers and machine code can improve performance. Loop-invariant code motion and return value optimization are examples of optimizations that reduce the need for auxiliary variables and can even result in faster performance by avoiding round-about optimizations.
=== Build level ===
Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, optimizing for specific processor models or hardware capabilities, or predicting branching, for instance. Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization.
=== Compile level ===
Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict.
=== Assembly level ===
At the lowest level, writing code using an assembly language, designed for a particular hardware platform can produce the most efficient and compact code if the programmer takes advantage of the full repertoire of machine instructions. Many operating systems used on embedded systems have been traditionally written in assembler code for this reason. Programs (other than very small programs) are seldom written from start to finish in assembly due to the time and cost involved. Most are compiled down from a high level language to assembly and hand optimized from there. When efficiency and size are less important large parts may be written in a high-level language.
With more modern optimizing compilers and the greater complexity of recent CPUs, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step.
Much of the code written today is intended to run on as many machines as possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor, expecting a different tuning of the code.
Typically today rather than writing in assembly language, programmers will use a disassembler to analyze the output of a compiler and change the high-level source code so that it can be compiled more efficiently, or understand why it is inefficient.
=== Run time ===
Just-in-time compilers can produce customized machine code based on run-time data, at the cost of compilation overhead. This technique dates to the earliest regular expression engines, and has become widespread with Java HotSpot and V8 for JavaScript. In some cases adaptive optimization may be able to perform run time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors.
Profile-guided optimization is an ahead-of-time (AOT) compilation optimization technique based on run time profiles, and is similar to a static "average case" analog of the dynamic technique of adaptive optimization.
Self-modifying code can alter itself in response to run time conditions in order to optimize code; this was more common in assembly language programs.
Some CPU designs can perform some optimizations at run time. Some examples include out-of-order execution, speculative execution, instruction pipelines, and branch predictors. Compilers can help the program take advantage of these CPU features, for example through instruction scheduling.
=== Platform dependent and independent optimizations ===
Code optimization can be also broadly categorized as platform-dependent and platform-independent techniques. While the latter ones are effective on most or all platforms, platform-dependent techniques use specific properties of one platform, or rely on parameters depending on the single platform or even on the single processor. Writing or producing different versions of the same code for different processors might therefore be needed. For instance, in the case of compile-level optimization, platform-independent techniques are generic techniques (such as loop unrolling, reduction in function calls, memory efficient routines, reduction in conditions, etc.), that impact most CPU architectures in a similar way. A great example of platform-independent optimization has been shown with inner for loop, where it was observed that a loop with an inner for loop performs more computations per unit time than a loop without it or one with an inner while loop. Generally, these serve to reduce the total instruction path length required to complete the program and/or reduce total memory usage during the process. On the other hand, platform-dependent techniques involve instruction scheduling, instruction-level parallelism, data-level parallelism, cache optimization techniques (i.e., parameters that differ among various platforms) and the optimal instruction scheduling might be different even on different processors of the same architecture.
== Strength reduction ==
Computational tasks can be performed in several different ways with varying efficiency. A more efficient version with equivalent functionality is known as a strength reduction. For example, consider a C code snippet whose intention is to obtain the sum of all integers from 1 to N by looping over them and accumulating a running total.
This code can (assuming no arithmetic overflow) be rewritten using the closed-form mathematical formula sum = N × (N + 1) / 2.
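A sketch of the two versions in Python (the original snippet is in C, and the function names here are illustrative):

```python
def sum_loop(n):
    # Straightforward version: n additions
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Strength-reduced version: constant time, same result
    # (assuming arithmetic overflow is not a concern)
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # both print 5050
```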
The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient, while retaining the same functionality. See algorithmic efficiency for a discussion of some of these techniques. However, a significant improvement in performance can often be achieved by removing extraneous functionality.
Optimization is not always an obvious or intuitive process. In the example above, the "optimized" version might actually be slower than the original version if N were sufficiently small and the particular hardware happens to be much faster at performing addition and looping operations than multiplication and division.
== Trade-offs ==
In some cases, however, optimization relies on using more elaborate algorithms, making use of "special cases" and special "tricks" and performing complex trade-offs. A "fully optimized" program might be more difficult to comprehend and hence may contain more faults than unoptimized versions. Beyond eliminating obvious antipatterns, some code level optimizations decrease maintainability.
Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource. This will usually require a trade-off – where one factor is optimized at the expense of others. For example, increasing the size of cache improves run time performance, but also increases the memory consumption. Other common trade-offs include code clarity and conciseness.
There are instances where the programmer performing the optimization must decide to make the software better for some operations but at the cost of making other operations less efficient. These trade-offs may sometimes be of a non-technical nature – such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success but comes perhaps with the burden of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as pessimizations.
== Bottlenecks ==
Optimization may include finding a bottleneck in a system – a component that is the limiting factor on performance. In terms of code, this will often be a hot spot – a critical part of the code that is the primary consumer of the needed resource – though it can be another factor, such as I/O latency or network bandwidth.
In computer science, resource consumption often follows a form of power law distribution, and the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations. In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).
More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data — the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit, and thus a hybrid algorithm or adaptive algorithm may be faster than any single algorithm. A performance profiler can be used to narrow down decisions about which functionality fits which conditions.
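A minimal Python sketch of such a hybrid: merge sort falls back to insertion sort below a size cutoff, where the simpler algorithm's small constant factors win. The cutoff value 16 is an illustrative assumption, not a tuned one:

```python
def hybrid_sort(items, cutoff=16):
    # Below the cutoff, insertion sort's low overhead beats the
    # recursion; above it, merge sort's O(n log n) growth wins.
    if len(items) <= cutoff:
        result = list(items)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        return result
    mid = len(items) // 2
    left = hybrid_sort(items[:mid], cutoff)
    right = hybrid_sort(items[mid:], cutoff)
    # Merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Production sorts such as CPython's Timsort use the same idea with carefully profiled cutoffs.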
In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor, due to the latency of each disk read. Caching the result is similarly effective, though also requiring larger memory use.
== When to optimize ==
Optimization can reduce readability and add code that is used only to improve the performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage.
Donald Knuth made the following two statements on optimization:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"
(He also attributed the quote to Tony Hoare several years later, although this might have been an error as Hoare disclaims having coined the phrase.)
"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"
"Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.
When deciding whether to optimize a specific part of the program, Amdahl's Law should always be considered: the impact on the overall program depends very much on how much time is actually spent in that specific part, which is not always clear from looking at the code without a performance analysis.
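Amdahl's Law can be illustrated with a short Python sketch: if a part taking fraction p of the run time is sped up by a factor s, the overall speedup is 1 / ((1 − p) + p/s):

```python
def amdahl_speedup(p, s):
    # p: fraction of total run time spent in the optimized part
    # s: speedup factor achieved for that part
    return 1.0 / ((1.0 - p) + p / s)

# Even an unboundedly large speedup of a part taking 10% of the
# run time yields at most ~1.11x overall:
print(amdahl_speedup(0.10, 1e9))
```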
A better approach is therefore to design first, code from the design and then profile/benchmark the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization.
In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization.
Modern compilers and operating systems are so efficient that the intended performance increases often fail to materialize. As an example, caching data at the application level that is again cached at the operating system level does not yield improvements in execution. Even so, it is a rare case when the programmer will remove failed optimizations from production code. It is also true that advances in hardware will more often than not obviate any potential improvements, yet the obscuring code will persist into the future long after its purpose has been negated.
== Macros ==
Optimization during code development using macros takes on different forms in different languages.
In some procedural languages, such as C and C++, macros are implemented using token substitution. Nowadays, inline functions can be used as a type safe alternative in many cases. In both cases, the inlined function body can then undergo further compile-time optimizations by the compiler, including constant folding, which may move some computations to compile time.
In many functional programming languages, macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which it is claimed makes them safer to use. Since in many cases interpretation is used, that is one way to ensure that such computations are only performed at parse-time, and sometimes the only way.
Lisp originated this style of macro, and such macros are often called "Lisp-like macros". A similar effect can be achieved by using template metaprogramming in C++.
In both cases, work is moved to compile-time. The difference between C macros on one side, and Lisp-like macros and C++ template metaprogramming on the other side, is that the latter tools allow performing arbitrary computations at compile-time/parse-time, while expansion of C macros does not perform any computation, and relies on the optimizer's ability to perform it. Additionally, C macros do not directly support recursion or iteration, so are not Turing complete.
As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete.
== Automated and manual optimization ==
Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm.
Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or system administrators explicitly change code so that the overall system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations. Since many parameters influence the program performance, the program optimization space is large. Meta-heuristics and machine learning are used to address the complexity of program optimization.
Use a profiler (or performance analyzer) to find the sections of the program that are taking the most resources – the bottleneck. Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong. Optimizing an unimportant piece of code will typically do little to help the overall performance.
When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program. More often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine.
After the programmer is reasonably sure that the best algorithm is selected, code optimization can start. Loops can be unrolled (for lower loop overhead, although this can often lead to lower speed if it overloads the CPU cache), data types as small as possible can be used, integer arithmetic can be used instead of floating-point, and so on. (See algorithmic efficiency article for these and other techniques.)
Performance bottlenecks can be due to language limitations rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler.
Rewriting sections "pays off" in these circumstances because of a general "rule of thumb" known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So, putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed – if the correct part(s) can be located.
Manual optimization sometimes has the side effect of undermining readability. Thus code optimizations should be carefully documented (preferably using in-line comments), and their effect on future development evaluated.
The program that performs an automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers can often tailor the generated code to specific processors.
Today, automated optimizations are almost exclusively limited to compiler optimization. However, because compiler optimizations are usually limited to a fixed set of rather general optimizations, there is considerable demand for optimizers which can accept descriptions of problem and language-specific optimizations, allowing an engineer to specify custom optimizations. Tools that accept descriptions of optimizations are called program transformation systems and are beginning to be applied to real software systems such as C++.
Some high-level languages (Eiffel, Esterel) optimize their programs by using an intermediate language.
Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time.
== Time taken for optimization ==
Sometimes, the time taken to undertake optimization may itself be an issue.
Optimizing existing code usually does not add new features, and worse, it might add new bugs in previously working code (as any change might). Because manually optimized code might sometimes have less "readability" than unoptimized code, optimization might impact maintainability of it as well. Optimization comes at a price and it is important to be sure that the investment is worthwhile.
An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or else speed up its own operation. A compilation performed with optimization "turned on" usually takes longer, although this is usually only a problem when programs are quite large.
In particular, for just-in-time compilers the performance of the run time compile component, executing together with its target code, is the key to improving overall execution speed.
== References ==
== Further reading ==
Jon Bentley: Writing Efficient Programs, ISBN 0-13-970251-2.
Donald Knuth: The Art of Computer Programming
How To Write Fast Numerical Code: A Small Introduction
"What Every Programmer Should Know About Memory" by Ulrich Drepper – explains the structure of modern memory subsystems and suggests how to utilize them efficiently
"Linux Multicore Performance Analysis and Optimization in a Nutshell", presentation slides by Philip Mucci
Programming Optimization by Paul Hsieh
Writing efficient programs ("Bentley's Rules") by Jon Bentley
"Performance Anti-Patterns" by Bart Smaalders | Wikipedia/Optimization_(computer_science) |
In computer science, an in-place algorithm is an algorithm that operates directly on the input data structure without requiring extra space proportional to the input size. In other words, it modifies the input in place, without creating a separate copy of the data structure. An algorithm which is not in-place is sometimes called not-in-place or out-of-place.
In-place can have slightly different meanings. In its strictest form, the algorithm can only have a constant amount of extra space, counting everything including function calls and pointers. However, this form is very limited as simply having an index to a length n array requires O(log n) bits. More broadly, in-place means that the algorithm does not use extra space for manipulating the input but may require a small though nonconstant extra space for its operation. Usually, this space is O(log n), though sometimes anything in o(n) is allowed. Note that space complexity also has varied choices in whether or not to count the index lengths as part of the space used. Often, the space complexity is given in terms of the number of indices or pointers needed, ignoring their length. In this article, we refer to total space complexity (DSPACE), counting pointer lengths. Therefore, the space requirements here have an extra log n factor compared to an analysis that ignores the lengths of indices and pointers.
An algorithm may or may not count the output as part of its space usage. Since in-place algorithms usually overwrite their input with output, no additional space is needed. When writing the output to write-only memory or a stream, it may be more appropriate to only consider the working space of the algorithm. In theoretical applications such as log-space reductions, it is more typical to always ignore output space (in these cases it is more essential that the output is write-only).
== Examples ==
Given an array a of n items, suppose we want an array that holds the same elements in reversed order and to dispose of the original. One seemingly simple way to do this is to create a new array of equal size, fill it with copies from a in the appropriate order and then delete a.
function reverse(a[0..n - 1])
    allocate b[0..n - 1]
    for i from 0 to n - 1
        b[n - 1 - i] := a[i]
    return b
Unfortunately, this requires O(n) extra space for having the arrays a and b available simultaneously. Also, allocation and deallocation are often slow operations. Since we no longer need a, we can instead overwrite it with its own reversal using this in-place algorithm, which needs only a constant number (two) of auxiliary integers, i and tmp, no matter how large the array is.
function reverse_in_place(a[0..n - 1])
    for i from 0 to floor((n - 2)/2)
        tmp := a[i]
        a[i] := a[n - 1 - i]
        a[n - 1 - i] := tmp
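The in-place pseudocode translates directly into a short Python sketch (the function name is illustrative; Python lists already offer `list.reverse()`, so this is purely for exposition):

```python
def reverse_in_place(a):
    """Reverse the list a in place, using only two auxiliary variables."""
    n = len(a)
    for i in range(n // 2):  # i runs from 0 to floor((n - 2)/2)
        # Swap the mirrored pair; Python's tuple assignment plays the
        # role of the explicit tmp variable in the pseudocode.
        a[i], a[n - 1 - i] = a[n - 1 - i], a[i]
```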
As another example, many sorting algorithms rearrange arrays into sorted order in-place, including: bubble sort, comb sort, selection sort, insertion sort, heapsort, and Shell sort. These algorithms require only a few pointers, so their space complexity is O(log n).
Quicksort operates in-place on the data to be sorted. However, quicksort requires a stack of O(log n) pointers to keep track of the subarrays in its divide-and-conquer strategy. Consequently, quicksort needs O(log² n) additional space. Although this non-constant space technically takes quicksort out of the in-place category, quicksort and other algorithms needing only O(log n) additional pointers are usually considered in-place algorithms.
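The usual way to guarantee that the stack holds only O(log n) pointers is to recurse into the smaller partition and loop over the larger one. A minimal Python sketch of this technique (function names are illustrative; a Lomuto partition is used for brevity):

```python
def partition(a, lo, hi):
    """Lomuto partition: move pivot a[hi] to its final index and return it."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort_in_place(a, lo=0, hi=None):
    """Sort a[lo..hi] in place. Recursion depth stays O(log n) because we
    always recurse into the smaller partition and iterate on the larger."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort_in_place(a, lo, p - 1)  # smaller left side: recurse
            lo = p + 1                        # larger right side: loop
        else:
            quicksort_in_place(a, p + 1, hi)  # smaller right side: recurse
            hi = p - 1                        # larger left side: loop
```

Because each recursive call handles at most half of the current range, the call stack never grows beyond O(log n) frames.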
Most selection algorithms are also in-place, although some considerably rearrange the input array in the process of finding the final, constant-sized result.
Some text manipulation algorithms such as trim and reverse may be done in-place.
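As a sketch of the idea (assuming a mutable buffer such as a Python bytearray, since Python strings are immutable), a trim can delete the whitespace runs at each end of the buffer rather than building a new string:

```python
def trim_in_place(buf: bytearray) -> None:
    """Strip leading and trailing ASCII whitespace from buf without
    allocating a second buffer of comparable size."""
    ws = b" \t\r\n"
    # Drop trailing whitespace by shrinking the buffer from the right.
    end = len(buf)
    while end > 0 and buf[end - 1] in ws:
        end -= 1
    del buf[end:]
    # Drop leading whitespace; remaining bytes shift left within buf.
    start = 0
    while start < len(buf) and buf[start] in ws:
        start += 1
    del buf[:start]
```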
== In computational complexity ==
In computational complexity theory, the strict definition of in-place algorithms includes all algorithms with O(1) space complexity, the class DSPACE(1). This class is very limited; it equals the regular languages. In fact, it does not even include any of the examples listed above.
Algorithms in L, the class of problems requiring O(log n) additional space, are usually considered in-place. This class is more in line with the practical definition, as it allows numbers of size n as pointers or indices. This expanded definition still excludes quicksort, however, because of its recursive calls.
Identifying the in-place algorithms with L has some interesting implications; for example, it means that there is a (rather complex) in-place algorithm to determine whether a path exists between two nodes in an undirected graph, a problem that requires O(n) extra space using typical algorithms such as depth-first search (a visited bit for each node). This in turn yields in-place algorithms for problems such as determining if a graph is bipartite or testing whether two graphs have the same number of connected components.
== Role of randomness ==
In many cases, the space requirements of an algorithm can be drastically cut by using a randomized algorithm. For example, if one wishes to know whether two vertices in a graph of n vertices are in the same connected component of the graph, there is no known simple, deterministic, in-place algorithm to determine this. However, if we simply start at one vertex and perform a random walk of about 20n³ steps, the chance that we will stumble across the other vertex, provided that it is in the same component, is very high. Similarly, there are simple randomized in-place algorithms for primality testing such as the Miller–Rabin primality test, and there are also simple in-place randomized factoring algorithms such as Pollard's rho algorithm.
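A minimal Python sketch of the random-walk connectivity test (illustrative; the 20n³ step budget is the one quoted above, and the answer is one-sided: a positive answer is always correct, while a negative one may be wrong with small probability):

```python
import random

def same_component_random_walk(adj, u, v, steps=None):
    """Randomized, small-space test of whether v is reachable from u.

    adj is an adjacency list {vertex: [neighbours]}. Beyond the input,
    only a constant number of variables is used: the current vertex and
    a step counter. One-sided error: True is always correct; False may
    be wrong with small probability.
    """
    n = len(adj)
    if steps is None:
        steps = 20 * n ** 3  # step budget from the text above
    cur = u
    for _ in range(steps):
        if cur == v:
            return True
        if not adj[cur]:  # isolated vertex: the walk cannot move
            break
        cur = random.choice(adj[cur])
    return cur == v
```

Contrast this with depth-first search, which needs a visited bit per vertex and therefore O(n) extra space.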
== In functional programming ==
Functional programming languages often discourage or do not support explicit in-place algorithms that overwrite data, since this is a type of side effect; instead, they only allow new data to be constructed. However, good functional language compilers will often recognize when an object very similar to an existing one is created and then the old one is thrown away, and will optimize this into a simple mutation "under the hood".
Note that it is possible in principle to carefully construct in-place algorithms that do not modify data (unless the data is no longer being used), but this is rarely done in practice.
== See also ==
Table of in-place and not-in-place sorting algorithms
== References ==
The Design of Everyday Things is a best-selling book by cognitive scientist and usability engineer Donald Norman. Originally published in 1988 with the title The Psychology of Everyday Things, it is often referred to by the initialisms POET and DOET. A new preface was added in 2002 and a revised and expanded edition was published in 2013.
The book's premise is that design serves as the communication between object and user, and discusses how to optimize that conduit of communication in order to make the experience of using the object pleasurable. It argues that although people are often keen to blame themselves when objects appear to malfunction, it is not the fault of the user but rather the lack of intuitive guidance that should be present in the design.
Norman uses case studies to describe the psychology behind what he deems good and bad design, and proposes design principles. The book spans several disciplines including behavioral psychology, ergonomics, and design practice.
== Contents ==
In the book, Norman introduced the term affordance as it applied to design,: 282 borrowing James J. Gibson's concept from ecological psychology. In the revised edition of his book in 2013, he also introduced the concept of signifiers to clarify his definition of affordances. Examples of affordances are doors that can be pushed or pulled. These are the possible interactions between an object and its user. Examples of corresponding signifiers are flat plates on doors meant to be pushed, small finger-size push-buttons, and long and rounded bars we intuitively use as handles. As Norman used the term, a door affords pushing or pulling, and the plate or button signals that it is meant to be pushed, while the bar or handle signals pulling.: 282–3 : 9 Norman discussed door handles at length.: 10, 87–92
He also popularized the term user-centered design, which he had previously referred to in User-Centered System Design in 1986. He used the term to describe design based on the needs of the user, leaving aside what he deemed secondary issues like aesthetics. User-centered design involves simplifying the structure of tasks, making things visible, getting the mapping right, exploiting the powers of constraint, designing for error, explaining affordances, and the seven stages of action. He went to great lengths to define and explain these terms in detail, giving examples following and going against the advice given and pointing out the consequences.
Other topics of the book include:
The Psychopathology of Everyday Things
The Psychology of Everyday Actions
Knowledge in the Head and in the World
Knowing What to Do
To Err Is Human
Human-Centered Design
The Design Challenge
=== Seven stages of action ===
Seven stages of action are described in chapter two of the book. They include four stages of execution and three stages of evaluation:
Forming the goal
Forming the intention
Specifying an action
Executing the action
Perceiving the state of the world
Interpreting the state of the world
Evaluating the outcome
==== Building up the Stages ====
The history behind the action cycle starts from a conference in Italy attended by Donald Norman. This excerpt has been taken from the book The Design of Everyday Things:
I am in Italy at a conference. I watch the next speaker attempt to thread a film onto a projector that he never used before. He puts the reel into place, then takes it off and reverses it. Another person comes to help. Jointly they thread the film through the projector and hold the free end, discussing how to put it on the takeup reel. Two more people come over to help and then another. The voices grow louder, in three languages: Italian, German and English. One person investigates the controls, manipulating each and announcing the result. Confusion mounts. I can no longer observe all that is happening. The conference organizer comes over. After a few moments he turns and faces the audience, who had been waiting patiently in the auditorium. "Ahem," he says, "is anybody expert in projectors?" Finally, fourteen minutes after the speaker had started to thread the film (and eight minutes after the scheduled start of the session) a blue-coated technician appears. He scowls, then promptly takes the entire film off the projector, rethreads it, and gets it working.: 45–46
Norman pondered the reasons that make something like threading a projector difficult to do. To examine this, he wanted to know what happens when someone does something, and so he examined the structure of an action. To get something done, one starts with a notion of what is wanted – the goal to be achieved. Then one does something to the world, i.e. takes action to move oneself or to manipulate someone or something. Finally, one checks whether the goal was attained. This led to the formulation of the stages of execution and evaluation.: 46
==== Stages of Execution ====
Execution formally means to perform or do something. Norman explains that a person sitting in an armchair reading a book at dusk might need more light as the room becomes dimmer and dimmer. The goal is to get more light; to achieve it, the reader needs to switch on a lamp. To do this, one must specify how to move one's body, how to stretch to reach the light switch, and how to extend one's finger to push the button. The goal has to be translated into an intention, which in turn has to be made into an action sequence.
Thus, formulation of stages of execution:
Start at the top with the goal, the state that is to be achieved.
The goal is translated into an intention to do some action.
The intention must be translated into a set of internal commands, an action sequence that can be performed to satisfy the intention.
The action sequence is still a mental event: nothing happens until it is executed, performed upon the world.
==== Stages of Evaluation ====
Evaluation formally means to examine and calculate. Norman explains that after turning on the light, we evaluate if it is actually turned on. A careful judgement is then passed on how the light has affected our world i.e. the room in which the person is sitting on the armchair while reading a book.
The formulation of the stages of evaluation can be described as:
Evaluation starts with our perception of the world.
This perception must then be interpreted according to our expectations.
Then it is compared (evaluated) with respect to both our intentions and our goals.
==== Gulf of execution ====
The difference between the intentions and the allowable actions is the gulf of execution.
"Consider the movie projector example: one problem resulted from the Gulf of Execution. The person wanted to set up the projector. Ideally, this would be a simple thing to do. But no, a long, complex sequence was required. It wasn't all clear what actions had to be done to accomplish the intentions of setting up the projector and showing the film.": 51
The gulf of execution is the gap between a user's goal for action and the means to execute that goal. One of usability's primary aims is to reduce this gap by removing roadblocks and steps that cause extra thinking and actions, which distract the user's attention from the intended task, interrupt the flow of work, and decrease the chance of completing the task successfully.
This can be illustrated through the discussion of a VCR problem. Let us imagine that a user would like to record a television show. They see the solution to this problem as simply pressing the Record button. However, in reality, to record a show on a VCR, several actions must be taken:
Press the record button.
Specify time of recording, usually involving several steps to change the hour and minute settings.
Select channel to record on - either by entering the channel's number or selecting it with up/down buttons.
Save the recording settings, perhaps by pressing an "OK" or "menu" or "enter" button.
The difference between the user's perceived execution actions and the required actions is the gulf of execution.
==== Gulf of evaluation ====
The gulf of evaluation reflects the amount of effort that the person must exert to interpret the physical state of the system and to determine how well the expectations and intentions have been met. It is the degree to which the system or artifact provides representations that can be directly perceived and interpreted in terms of the expectations and intentions of the user.: 51 Put differently, the gulf of evaluation is the difficulty of assessing the state of the system and how well the artifact supports the discovery and interpretation of that state. In the book, "The gulf is small when the system provides information about its state in a form that is easy to get, is easy to interpret, and matches the way the person thinks of the system".: 51
"In the movie projector example there was also a problem with the Gulf of Evaluation. Even when the film was in the projector, it was difficult to tell if it had been threaded correctly.": 51–52
The gulf of evaluation applies to the gap between an external stimulus and the time a person understands what it means. The gulf of evaluation stands for the psychological gap that must be crossed to interpret a user interface display, following the steps: interface → perception → interpretation → evaluation. Both "gulfs" were first mentioned in Donald Norman's 1986 book User Centered System Design: New Perspectives on Human-computer Interaction.
==== Usage as design aids ====
The seven-stage structure is referenced as design aid to act as a basic checklist for designers' questions to ensure that the Gulfs of Execution and Evaluation are bridged.: 52–53
The seven stages of action can be broken down into four main principles of good design:
Visibility – by looking, the user can tell the state of the device and the alternatives for action.
A good conceptual model – The designer provides a good conceptual model for the user, with consistency in the presentation of operations and results and a coherent, consistent system image.
Good mappings – it is possible to determine the relationships between actions and results, between the controls and their effects, and between the system state and what is visible.
Feedback – the user receives full and continuous feedback about the results of the actions.
== Reception ==
After a group of industrial designers felt affronted after reading an early draft, Norman rewrote the book to make it more sympathetic to the profession.
The book was originally published with the title The Psychology of Everyday Things. In his preface to the 2002 edition, Norman states that although his academic peers liked the original title, he believed the new title better conveyed the content of the book and better attracted interested readers.: ix
== See also ==
Emotional Design
Industrial design
Interaction design
Principles of user interface design
== References ==
== Further reading ==
O'Dwyer, Davin (December 12, 2009). "Grand designs". The Irish Times. Retrieved November 22, 2011.
Emotional Design is both the title of a book by Donald Norman and of the concept it represents.
== Content ==
The main topic covered is how emotions play a crucial role in the human ability to understand the world, and how people learn new things. In fact, studies show that emotion influences people's information processing and decision-making. For example, aesthetically pleasing objects appear to the user to be more effective, by virtue of their sensual appeal. This is due to the affinity the user feels for an object that appeals to them, through the formation of an emotional connection with the object. Consequently, it is believed that companies and designers should not rely on pricey marketing; they should link their services to customers' emotions and daily lives to get them "hooked" on a product.
Norman's approach is based on the classical ABC model of attitudes. However, he adapted the concept for application in design. The three dimensions have new names (the visceral, behavioral, and reflective levels) and partially new content.
The first is the "visceral" level which is about immediate initial reactions people unconsciously do and are greatly determined by sensory factors (look, feel, smell, and sound). Norman argued that attractive products work better because they can engage multiple senses to evoke emotional responses and bonds through use of visual factors of color, texture, and shape. He contends that beautifully designed products make people feel good. This is where appearance matters, and first impressions are formed, and the texture and surface of an object become important in evoking a specific emotional reaction. Thus, viscerally well-designed products tend to evoke positive emotions and experiences in the consumers.
The second is "behavioral" level which is all about use; what does a product do, what function does it perform? Good behavioral design should be human centered, focusing upon understanding and satisfying the needs of people who use the product. This level of design starts with understanding the user's demands, ideally derived from conducting studies of relevant behavior in homes, schools, places of work, or wherever the product will be used.
The third is "reflective" level at which the product has meaning for consumers; the emotional connections which are formed over time using the product and are influenced by cultural, social, and personal factors. Via good reflective design, people will feel a sense of personal bond and identity with an object, and it will become a part of their daily lives. It is how we remember the experience itself and how it made us feel.
In summary, the visceral level concerns itself with the aesthetic or attractiveness of an object. The behavioral level considers the function and usability of the product. The reflective level takes into account prestige and value; this is often influenced by the branding of a product.
In the book, Norman shows that design of most objects are perceived on all three levels (dimensions). Therefore, a good design should address all three levels. Norman also mentions in his book that "technology should bring more to our lives than the improved performance of tasks: it should be richness and enjoyment." (pg 101) He stresses the importance of creating fun and pleasurable products instead of dull and dreary ones. By mixing all three design levels and the four pleasures by Patrick W. Jordan, the product should evoke an emotion when the user is interacting with the product. The interaction of these three levels of design leads to the culmination of the "emotional design," a new, holistic approach to designing successful products and creates enduring and delightful product experience.
Emotional design is an important element when generating ideas for human-centered opportunities. People can more easily relate to a product, a service, a system, or an experience when they are able to connect with it at a personal level. Rather than thinking that there is one solution for all, both Norman's three design levels and Jordan's four pleasures of design can help us design for each individual's needs. Both concepts can be used as tools to better connect with the end user being designed for. This viewpoint is gaining a lot of acceptance in the business world; for example, Postrel argues that the "look and feel" of people, places, and things are more important than we think. In other words, people today are more concerned with the look and feel of products than with their functionality.
== Cover ==
The front cover of Emotional Design showcases Philippe Starck's Juicy Salif, an icon of industrial design that Norman heralds as an "item of seduction" and the manifestation of his thesis.
== Concept ==
Emotions are a fundamental aspect of human experience, and our emotional responses to people, places, and objects are shaped by a complex interplay of factors. As Peter Boatwright and Jonathan Cagan point out, "emotion is human, and its reach is vast". In the current marketplace, successful companies are not just creating good products, but also producing captivating ones that not only attract consumer attention, but also influence their demands and increase their engagement based on both the product's performance and how it makes them feel.
Emotional design is also influenced by the four pleasures, identified in Designing Pleasurable Products by Patrick W. Jordan. In this book Jordan builds on the work of Lionel Tiger to identify the four kinds of pleasures. Jordan describes these as "modes of motivation that enhance a product or a service. Life is unenjoyable without appreciating what we do, and it is human intuition to seek pleasure." The idea of incorporating pleasure into products is to provide the buyer with an added experience. Jordan points out in his book that a product should be more than something functional and/or aesthetically pleasing: it should evoke an emotion through the use of pleasures. Although it is hard to build all four pleasures into one product, focusing on even one may be what causes a product to be chosen over another. The four pleasures that could be implemented into products or a service are:
Physio-pleasure deals with the body and pleasure derived from the sensory organs. This includes taste, touch, and smell, as well as sexual and sensual pleasure. In the context of products, these pleasures can be associated with tactile properties (the way interaction with the product feels) or olfactory properties (the leather smell in a new car, for example).
Socio-pleasure is the enjoyment derived from the company of others. Products can facilitate social interaction in a number of ways, either through providing a service that brings people together (a coffee-maker enabling a host to provide their guests with fresh coffee) or by being a talking point in and of itself.
Psycho-pleasure is defined as pleasure which is gained from the accomplishment of a task. In a product context, psycho-pleasure relates to the extent in which a product can help in task completion and make the accomplishment a satisfying experience. This pleasure may also take into account the efficiency with which a task can be completed (a word processor with built-in formatting decreasing the amount of time spent on creating a document, for example).
Ideo-pleasure refers to pleasure derived from theoretical entities such as books, music, and art. It may relate to the aesthetics of a product and the values it embodies. A product made of bio-degradable material, for example, can be seen as holding value in the environment which, in turn, may appeal to someone who wishes to be environmentally responsible.
== The use of emotional design ==
=== In film ===
People mostly know film as entertainment, but film can do more than that. Gianluca Sergi and Alan Lovell cite a study in their essays on cinema entertainment in which film viewers see films as an escape from reality and a source of amusement, relaxation and knowledge, meaning films also function as an educational tool and a method of stress relief. Measured against the requirements of emotional design, film fulfills them. Firstly, movies have an attractive appearance. Whether movies start with a black and white concept as in Oz the Great and Powerful or an oddly colorful but serious theme as in Suicide Squad, they usually capture the audiences' attention, who then want to continue watching the whole show. The "wow" reaction that viewers have is the visceral reaction, according to how Don Norman explains the three levels of design in his book Emotional Design: Why We Love (or Hate) Everyday Things: "[w]hen we perceive something as "pretty," that judgment comes directly from the visceral level." (65–66) Secondly, the behavioral level: in a literal sense, the only function of movies is to be watched. With the advancement of technology, movies now have high resolution, as well as various lighting dynamics and camera angles. Lastly, applying Don Norman's statement on how products can add positively to the self-image of the users and how good the users feel after owning the products, film does influence its viewers greatly and affect the way they act. Trice and Greer indicate that "we identify with characters on the screen who are like us in terms of age, sex and other characteristics; we also identify with people we would like to be like.[...] We tend to imitate "good" characters" (135). That being said, movies do not label any of their characters good or bad in a straightforward manner; the viewers only learn about the characters through the narrative, of which production design is a part.
=== In physical space design ===
Emotional design is one of the important aspects of creating a successful and enjoyable experience for customers in a physical space such as Starbucks. Emotional design refers to the ability of design elements to evoke certain emotions or feelings in customers. One example of emotional design at Starbucks is the use of warm lighting, comfortable seating, and relaxing music to create a cozy and inviting atmosphere. This creates a sense of comfort and relaxation, which can be particularly appealing to customers who are looking for a place to unwind or catch up with friends. Another example of emotional design at Starbucks is the use of distinctive and recognizable branding elements, such as the green logo, the mermaid icon, and the signature cup design. These elements create a sense of familiarity and loyalty among customers, who often associate the Starbucks brand with a certain lifestyle or personality.
=== In product design ===
Emotional design has a crucial role in product design, extending beyond a product's functionality and into the realm of meaningful experiences that evoke emotions in users. By introducing emotional cues into the product design, designers can provide users with emotions that create trust, satisfaction, joy, or nostalgia - all of which have an influential way of impacting user perceptions, engagement, and loyalty towards products. Research has shown that when users have an emotional connection to their product, it can enhance how effectively a product is usable, desirable, aesthetically pleasing, and valuable over time.
==== Enhancing Usability Through Emotional Design ====
One prominent impact of emotional design in product development is to improve usability and self-efficacy. Buker et al. outline that when products are designed to evoke positive emotion, this can improve the overall confidence of users in performing tasks successfully. Their research found that emotional product design can develop self-efficacy, which is the belief in one's own abilities to carry out behaviors successfully. Products designed with a straightforward interface and emotionally affirming approaches (positive messages reinforced through visuals) can lessen users' frustration and create motivation that leads to more engaged, confident product use. Emotional support elements such as visual appeal, sound, and tactile interaction or feedback can make a product feel approachable and empower its users.
Additionally, Buker et al. point out that emotional design works best in combination with usability-centered principles. Products that are easy to use, provide clear feedback, and include aesthetic, pleasant elements produce a feeling of competence and satisfaction in users. This loop of emotional usability can create a better immediate experience for a user and build attachment and loyalty in the long term.
==== Emotional Aesthetics and Sensory Appeal in Product Design ====
The aesthetic aspect of a product is important within the context of emotional design, as visual characteristics can prompt immediate emotional responses. Demirbilek and Sener assert that product semantics and emotional cues are essential to the user's understanding and experience. Their investigations illustrate that certain characteristics of a product's design, such as color, shape, texture, and material, can create different emotional associations. For instance, rounded, smooth shapes tend to suggest comfort and friendliness, whereas sharp, angular shapes may inspire aggression and tension. Designers can use visual and tactile constituents effectively in making emotive products feel like they invoke positive emotional responses, making them more attractive.
In addition, Demirbilek and Sener reveal that emotional design can develop narrative experiences about the products that provide them with symbolic or sentimental value. For example, retro-themed kitchen appliances constructed with retro color palettes and nostalgic form can remind consumers of a time in the past, creating familiarity and an emotional attachment. This experience creates a sense of value to the product, resulting in the likelihood that the product will be remembered and cherished by consumers.
==== The Role of Color Psychology in Emotional Product Design ====
Color is an influential element in emotional product design with direct psychological implications on human feelings. Feng and Zhao state that different colors can convey different emotional responses that directly influence an individual's purchasing intention. The effect of color has been noted in product design, and in their research on pro-environmental product design, they also noted that warm colors (e.g., red, yellow, orange) evoke excitement, energy, and optimism, making these colors good at attracting attention or prompting action. Conversely, cool colors (e.g., blue, green) cultivate calmness, trust, and reliability; hence, they mitigate apprehension in product use and encourage stability in areas where that may lead to a purchasing intention.
Color does not end with aesthetics alone; colors have demonstrable action impacts on user behavior when used appropriately during the design process. For example, technology companies frequently utilize blue as the color of choice in their product and brand development processes to indicate trust and reliability, while green is typically utilized in health and wellness products to elicit natural calming associations. Feng and Zhao state that through the systematic use of color psychology, designers should be able to produce products with visual appeal along with emotional resonance that appeal to users and inform purchasing actions.
=== In education and learning environments ===
Emotional design is being integrated into educational technologies at a progressively growing rate to reduce cognitive load and improve learning experiences. Chang and Chen claim that emotional design in e-textbooks and digital learning technologies can greatly influence a student's learning achievement and cognitive load. When learning environments utilize emotional design elements such as interesting visuals, interactive components, and positive feedback, it can lower the cognitive effort needed to study material, leading to a more productive and enjoyable learning experience.
For example, emotional design can lower cognitive overload in the learning experience mainly due to emotional cues provided to students that positively aid in memory retention and task completion. Chang and Chen also showcased that e-textbooks with emotional design elements provided students with better learning outcomes than traditional textbooks, citing that students' perceived level of engagement and motivation increased. Positive emotional reinforcement, including rewarding progress and praise, is also instrumental in stimulating motivation and persistence in educational environments.
== Relationship between emotion and design ==
Emotion and design are intricately linked in the field of emotional design, which is concerned with creating products, interfaces, and experiences that engage users on an emotional level. Emotional design involves the intentional use of design elements to evoke specific emotional responses in users.
The relationship between emotion and design in emotional design is rooted in the idea that emotions are a key driver of human behavior. People are more likely to engage with products and interfaces that evoke positive emotions such as joy, excitement, and delight, while negative emotions such as frustration and anger can lead to disengagement and avoidance.
In emotional design, designers use a variety of techniques to evoke emotions in users. These may include the use of color, typography, imagery, sound, and motion, among others. For example, a website might use bright, cheerful colors and playful animations to create a sense of fun and whimsy, while a meditation app might use soft, calming colors and soothing sounds to create a sense of relaxation and tranquility.
== Ethical considerations in emotional design ==
While the potential of emotional design is significant, ethical considerations also come into play, especially regarding manipulation. Emotional triggers that exploit a user's fears or insecurities, particularly so-called "dark patterns", have become a growing ethical concern in UX design. Gray et al. reflect on the "dark side" of UX design, where designers intentionally set out to trick users or manipulate them into doing things that are not in their best interest. They examine not only how designers may coerce users into making a purchase, but also how they exploit guilt or urgency, among other tactics. These concerns point to the responsibility designers bear to avoid deploying emotional strategies in ways that disregard the well-being of the user.
Similarly, Keinonen addresses design ethics within the scope of satisfying user needs. Here, again, the user must not be taken advantage of; emotional design should therefore be employed to promote user autonomy and well-being rather than profitability alone. To use emotional design ethically, the designer must balance influencing users' behavior with avoiding manipulation, regardless of intention. Designers must also consider cultural sensitivity: a single emotional cue cannot faithfully represent all groups, and an emotional appeal that resonates with one audience may fail to reach another. Ethical considerations in emotional design thus highlight the need to create emotional experiences that serve users' best interests without exploiting who they are emotionally.
== See also ==
Kansei engineering – a design approach incorporating emotional elements
Sustainable design
== References ==
Alan Cooper (born June 3, 1952) is an American software designer and programmer. Widely recognized as the "Father of Visual Basic", Cooper is also known for his books About Face: The Essentials of Interaction Design and The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity. As founder of Cooper, a leading interaction design consultancy, he created the Goal-Directed design methodology and pioneered the use of personas as practical interaction design tools to create high-tech products. On April 28, 2017, Alan was inducted into the Computer History Museum's Hall of Fellows "for his invention of the visual development environment in Visual BASIC, and for his pioneering work in establishing the field of interaction design and its fundamental tools."
== Biography ==
=== Early life ===
Alan Cooper grew up in Marin County, California, United States, where he attended the College of Marin, studying architecture. He learned programming and took on contract programming jobs to pay for college.
In 1975, soon after he left college and as the first microcomputers became available, Alan Cooper founded his first company, Structured Systems Group (SSG), in Oakland, California, which became one of the first microcomputer software companies. SSG's software accounting product, General Ledger, was sold through ads in popular magazines such as Byte and Interface Age. This software was, according to the historical account in Fire in the Valley (by Paul Freiberger and Michael Swaine), “probably the first serious business software for microcomputers.” It was both the start of Cooper's career as a software author and the beginning of the microcomputer software business. Ultimately, Cooper developed a dozen original products at Structured Systems Group before he sold his interest in the company in 1980.
Early on, Cooper worked with Gordon Eubanks to develop, debug, document, and publish Eubanks's business programming language, CBASIC, an early competitor to Bill Gates' and Paul Allen's Microsoft BASIC. Eubanks wrote CBASIC's precursor, BASIC-E, as a student project while at the Naval Postgraduate School in Monterey, California with professor Gary Kildall. When Eubanks left the Navy, he joined Kildall's successful operating system company, Digital Research, Inc., in Monterey. Soon thereafter, Eubanks and Kildall invited Cooper to join them at Digital Research as one of four founders of their research and development department. After two years at DRI, Cooper departed to develop desktop application software by himself.
During the 1980s, Alan Cooper authored several business applications including Microphone II for Windows and an early, critical-path project management program called SuperProject. Cooper sold SuperProject to Computer Associates in 1984, where it achieved success in the business-to-business marketplace.
=== Visual Basic ===
In 1988, Alan Cooper created a visual programming language (code-named “Ruby”) that allowed Windows users to build “Finder”-like shells. He called it “a shell construction set." After he demonstrated Ruby to Bill Gates, Microsoft purchased it. At the time, Gates commented that the innovation would have a “profound effect” on their entire product line. Microsoft initially decided not to release the product as a shell for users, but rather to transform it into a professional development tool for their QuickBASIC programming language called Visual Basic, which was widely used for business application development for Windows computers.
Cooper's dynamically installable control facility, which became known as the "VBX" interface, was a key component of "Ruby". It allowed any third-party developer to write a widget (control) as a DLL, place it in the Visual Basic directory, and have Visual Basic find it, communicate with it, and present it to the user as a seamless part of the program. The widget would appear in the tool palette and appropriate menus, and users could incorporate it into their Visual Basic applications. The VBX interface created an entirely new marketplace for vendors of these "dynamically installable controls." As a result of Cooper's work, many new software companies were able to deliver Windows software to market in the 1990s.
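The general pattern described above, a host application discovering extension components by scanning a directory and loading whatever conforms to an agreed interface, remains a common plugin architecture today. The sketch below is purely illustrative and uses modern Python rather than the actual VBX/DLL mechanism; the module layout and the `CONTROL` naming convention are assumptions for the example, not part of any real API:

```python
# Illustrative analogy of directory-based plugin discovery (not actual VBX code):
# the host scans a folder for plugin modules, loads each one dynamically, and
# registers any object that follows the agreed naming contract ("CONTROL").
import importlib.util
import pathlib

def discover_controls(plugin_dir):
    """Load every *.py file in plugin_dir and collect objects named CONTROL."""
    controls = {}
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # execute the plugin file
        control = getattr(module, "CONTROL", None)
        if control is not None:  # module conforms to the plugin contract
            controls[path.stem] = control
    return controls
```

As with VBX controls, the host needs no advance knowledge of the plugin: dropping a conforming file into the directory is enough for it to be found and presented to the user.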
The first book ever written about Visual Basic, The Waite Group’s Visual Basic How-To by Mitchell Waite, is dedicated to Alan Cooper. In his dedication, the author calls Cooper the “Father of Visual Basic.” This nickname has often served as Cooper's one-line resume.
In 1994, Bill Gates presented Cooper with the first Windows Pioneer Award for his contributions to the software industry. During the presentation, Gates took particular note of Cooper's innovative work creating the VBX interface.
In 1998, the SVForum honored Cooper with its Visionary Award.
=== Interaction design and user experience ===
Early in his career, Cooper began to critically consider the accepted approach to software construction. As he reports in his first book, he believed something important was missing—software authors were not asking, “How do users interact with this?” Cooper's early insights drove him to create a design process, focused not on what could be coded but on what could be designed to meet users’ needs.
In 1992, in response to a rapidly consolidating software industry, Cooper began consulting with other companies, helping them design their applications to be more user friendly. Within a few years, Alan Cooper had begun to articulate some of his basic design principles. With his clients, he championed a design methodology that puts the users’ needs first. Cooper interviewed the users of his client's products and discovered the common threads that made these people happy. Born of this practice was the use of personas as design tools. Cooper preached his vision in two books. His ideas helped to drive the user experience movement and define the craft that would come to be called “interaction design.”
==== About Face ====
Cooper's best-selling first book, About Face: The Essentials of User Interface Design, was first published in 1995. In it, Cooper introduces a comprehensive set of practical design principles, essentially a taxonomy for software design. By the second edition, as the industry and profession evolved, "interface design" had become the more precise "interaction design." The basic message of this book was directed at programmers: Do the right thing. Think about your users. The book is now in its fourth edition, entitled About Face: The Essentials of Interaction Design, and is considered a foundation text for the professional interaction designer. Cooper introduced the idea of software application posture, such as a "sovereign posture" for an application that occupies most of the screen and waits for user input, or a "transient posture" for software that does not run or engage with the user all the time. For websites, he discusses "informational" and "transactional" postures.
==== The Inmates Are Running the Asylum ====
In his 1998 book, The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, Alan Cooper outlined his methodology, called Goal-Directed design, based on the concept that software should speed the user towards his or her ultimate goal rather than ensnare him or her in computer minutiae. In the book, Cooper introduced a new concept that he called personas as a practical interaction design tool. Based on a brief discussion in the book, personas rapidly gained popularity in the software industry due to their unusual power and effectiveness. Today, the concepts of interaction design strategy and the use of personas have been broadly adopted across the industry. Cooper directs the message of his second book to the businessperson: know your users’ goals and how to satisfy them. You need interaction design to do the thing right. Cooper advocates for integrating design into business practice in order to meet customer needs and to build better products faster by doing it right the first time.
Alan Cooper's current focus is on how to effectively integrate the advances of interaction design with the effectiveness of agile software development methods. Cooper regularly speaks and blogs about this on his company's website.
=== Cooper ===
Cooper is a user experience design and strategy consulting firm headquartered in San Francisco with an office in New York. The firm is credited with developing several widely used design concepts, including goal-directed design, personas, and pair design. It was founded by Sue Cooper and Alan Cooper in 1992 in Menlo Park, California, under the name Cooper Software, and changed its name to Cooper Interaction Design in 1997. Cooper was the first consulting firm dedicated solely to interaction design. Its original clients were mainly Silicon Valley software and computer hardware companies.
The company uses a human-centered methodology called “goal-directed design” that emphasizes the importance of understanding the user's desired end-state and their motivations for getting there.
In 2002, Cooper began offering training classes to the public on topics including interaction design, service design, visual design, and design leadership.
Alan Cooper has served as President of Cooper (formerly Cooper Interaction Design), a user experience and interaction design consultancy in San Francisco, California, since its founding in 1992. The firm helps its customers with interaction design challenges and offers training courses in software design and development topics, including its Goal-Directed design methodology (under the CooperU brand).
In 2017, Cooper became part of Designit, a strategic design arm of Wipro Digital. Cooper Professional Education continued to exist as a teaching and learning division of Designit until it closed its doors to business on May 29, 2020.
== Bibliography ==
About Face: The Essentials of User Interface Design (ISBN 1-56884-322-4), 1995
The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity (ISBN 0-672-31649-8), 1998
About Face 2.0: The Essentials of Interaction Design (with Robert Reimann) (ISBN 0-7645-2641-3), 2003
About Face 3: The Essentials of Interaction Design (with Robert Reimann and David Cronin) (ISBN 0-4700-8411-1), 2007
About Face: The Essentials of Interaction Design, 4th Edition (with Robert Reimann, David Cronin, and Christopher Noessel) (ISBN 978-1118766576), 2014
== See also ==
Application posture
Design methods
Design thinking
Interaction design
User centered design
User experience design
Windows Pioneers
== References ==
== External links ==
Profile at Cooper.com Archived 2016-11-26 at the Wayback Machine
Article, Alexa, please kill me now: My thoughts on conversational UI
Agile 2008 interview, “Similarities Between Interaction Designers and Agile Programmers”
Interview, UX Podcast, Ranch Stories with Alan Cooper
Interview, Alan Cooper Interview on .NET Rocks
Interview, Conversation with Alan Cooper at Microsoft's Channel 9
Article, Alan Cooper on why he has been called "the Father of Visual Basic"
Interview, SEOV: Visions of Alan Cooper (Video Interviews) Archived 2008-04-17 at the Wayback Machine
Discussion, Alan Cooper on what companies must do to improve software products - mp3 format
Article, Alan Cooper and the Goal Directed Design Process—Gain AIGA Journal of Design for the Network Economy, 2001
Software Development Forum's Software Visionary Award
Interview, Triangulation 262: Alan Cooper
Article, "Tech Republic" The Church of Usability, Alan K'necht
Article, Dr. Dobbs Special Report 1997 (re. Gary Kildall), Michael Swaine
"History of CBASIC". Archived from the original on 2006-05-04. Retrieved 2006-05-04.
Encyclopedia entry, Structured Systems Group (Britannica.com)
Interview, Why People Yell at Their Computer Monitors and Hate Microsoft's Clippy
Elsevier ( EL-sə-veer) is a Dutch academic publishing company specializing in scientific, technical, and medical content. Its products include journals such as The Lancet, Cell, the ScienceDirect collection of electronic journals, Trends, the Current Opinion series, the online citation database Scopus, the SciVal tool for measuring research performance, the ClinicalKey search engine for clinicians, and the ClinicalPath evidence-based cancer care service. Elsevier's products and services include digital tools for data management, instruction, research analytics, and assessment. Elsevier is part of the RELX Group, known until 2015 as Reed Elsevier, a publicly traded company. According to RELX reports, in 2022 Elsevier published more than 600,000 articles annually in over 2,800 journals. As of 2018, its archives contained over 17 million documents and 40,000 e-books, with over one billion annual downloads.
Researchers have criticized Elsevier for its high profit margins and copyright practices. The company had a reported profit before tax of £2.295 billion with an adjusted operating margin of 33.1% in 2023. Much of the research that Elsevier publishes is publicly funded; its high costs have led to accusations of rent-seeking, boycotts against them, and the rise of alternate avenues for publication and access, such as preprint servers and shadow libraries.
== History ==
Elsevier was founded in 1880 and adopted the name and logo from the Dutch publishing house Elzevir that was an inspiration but has no connection to the contemporary Elsevier. The Elzevir family operated as booksellers and publishers in the Netherlands; the founder, Lodewijk Elzevir (1542–1617), lived in Leiden and established that business in 1580. As a company logo, Elsevier used the Elzevir family's printer's mark, a tree entwined with a vine and the words Non Solus, which is Latin for "not alone". According to Elsevier, this logo represents "the symbiotic relationship between publisher and scholar".
The expansion of Elsevier in the scientific field after 1945 was funded with the profits of the newsweekly Elsevier, which published its first issue on 27 October 1945 and was an instant success, becoming very profitable. As stated in its first issue, the weekly was a continuation of the monthly Elsevier, which had been founded in 1891 to promote the name of the publishing house and had to stop publication in December 1940 because of the German occupation of the Netherlands.
In May 1939, Elsevier's director Klautz established the Elsevier Publishing Company Ltd. in London to distribute these academic titles in the British Commonwealth (except Canada). When the Nazis occupied the Netherlands for five years from May 1940, he had just founded a second international office, the Elsevier Publishing Company Inc., in New York.
In 1947, Elsevier began publishing its first English-language journal, Biochimica et Biophysica Acta.
In 1970, Elsevier acquired the competing firm North-Holland. In 1971 the firm acquired Excerpta Medica, a small medical abstract publisher based in Amsterdam. As the first and only company in the world that employed a database for the production of journals, Excerpta Medica introduced computer technology to Elsevier. In 1978 Elsevier merged with Dutch newspaper publisher NDU, and devised a strategy to broadcast textual news to people's television sets through Viewdata and Teletext technology.
In 1979 Elsevier Science Publishers launched the Article Delivery Over Network Information System (ADONIS) project in conjunction with four business partners. The project aimed to find a way to deliver scientific articles to libraries electronically, and continued for over a decade. In 1991, in conjunction with nine American universities, Elsevier's The University Licensing Project (TULIP) took the first step toward making published, copyrighted material available over the Internet. It formed the basis for ScienceDirect, launched six years later. In 1997, after almost two decades of experiments, ScienceDirect was launched as the first online repository of electronic (scientific) books and articles. Though librarians and researchers were initially hesitant about the new technology, more and more of them switched to e-only subscriptions.
In 2004 Elsevier launched Scopus, a multidisciplinary metadata database of scholarly publications, only the second of its kind (after the Web of Science, although the free Google Scholar was also launched in 2004). Scopus covers journals, some conference papers, and books from various publishers, and measures performance at both the author and publication level. In 2009 SciVal Spotlight was released; this tool enabled research administrators to measure their institution's relative standing in terms of productivity, grants, and publications.
In 2013, Elsevier acquired Mendeley, a UK company making software for managing and sharing research papers. Mendeley, previously an open platform for sharing of research, was greatly criticized for the sale, which users saw as acceding to the "paywall" approach to research literature. Mendeley's previously open-sharing system now allows exchange of paywalled resources only within private groups. The New Yorker described Elsevier's reasons for buying Mendeley as two-fold: to acquire its user data, and to "destroy or coöpt an open-science icon that threatens its business model".
== Company statistics ==
During 2018, researchers submitted over 1.8 million research papers to Elsevier-based publications. Over 20,000 editors managed the peer review and selection of these papers, resulting in the publication of more than 470,000 articles in over 2,500 journals. Editors are generally unpaid volunteers who perform their duties alongside a full-time job in academic institutions, although exceptions have been reported. In 2013, the five editorial groups Elsevier, Springer, Wiley-Blackwell, Taylor & Francis, and SAGE Publications published more than half of all academic papers in the peer-reviewed literature. At that time, Elsevier accounted for 16% of the world market in science, technology, and medical publishing. In 2019, Elsevier accounted for the review, editing and dissemination of 18% of the world's scientific articles. About 45% of revenue by geography in 2019 derived from North America, 24% from Europe, and the remaining 31% from the rest of the world. Around 84% of revenue by format came from electronic usage and 16% came from print.
The firm employs 8,100 people. The CEO is Kumsal Bayazit, who was appointed on 15 February 2019. In 2018, it reported a mean 2017 gender pay gap of 29.1% for its UK workforce, while the median was 40.4%, the highest yet reported by a publisher in the UK. Elsevier attributed the result to the under-representation of women in its senior ranks and the prevalence of men in its technical workforce. The UK workforce consists of 1,200 people, representing 16% of Elsevier's global employee population. Elsevier's parent company, RELX, has a global workforce that is 51% female and 49% male, with 43% female and 57% male managers, and 29% female and 71% male senior operational managers.
In 2018, Elsevier accounted for 34% of the revenues of RELX Group (£2.538 billion of £7.492 billion). In operating profits, it represented 40% (£942 million of £2.346 billion). Adjusted operating profits (at constant currency) rose by 2% from 2017 to 2018. Profits grew further from 2018 to 2019, to a total of £982 million. In the first half of 2019, RELX reported the first slowdown in revenue growth for Elsevier in several years: 1% against an expectation of 2% and typical growth of at least 4% over the previous five years. Overall for 2019, Elsevier reported revenue growth of 3.9% from 2018, with underlying growth at constant currency of 2%. In 2019, Elsevier accounted for 34% of the revenues of RELX (£2.637 billion of £7.874 billion). In adjusted operating profits, it represented 39% (£982 million of £2.491 billion). Adjusted operating profits (at constant currency) rose by 2% from 2018 to 2019.
In 2019, researchers submitted over two million research papers to Elsevier-based publications. Over 22,000 editors managed the peer review and selection of these papers, resulting in the publication of about 500,000 articles in over 2,500 journals.
In 2020 Elsevier was the largest academic publisher, with approximately 16% of the academic publishing market and more than 3000 journals.
== Market model ==
=== Products and services ===
Products and services include electronic and print versions of journals, textbooks and reference works, and cover the health, life, physical, and social sciences.
The target markets are academic and government research institutions, corporate research labs, booksellers, librarians, scientific researchers, authors, editors, physicians, nurses, allied health professionals, medical and nursing students and schools, medical researchers, pharmaceutical companies, hospitals, and research establishments. It publishes in 13 languages including English, German, French, Spanish, Italian, Portuguese, Polish, Japanese, Hindi, and Chinese.
Flagship products and services include VirtualE, ScienceDirect, Scopus, Scirus, EMBASE, Engineering Village, Compendex, Cell, Knovel, SciVal, Pure, and Analytical Services, The Consult series (FirstCONSULT, PathCONSULT, NursingCONSULT, MDConsult, StudentCONSULT), Virtual Clinical Excursions, and major reference works such as Gray's Anatomy, Nelson Pediatrics, Dorland's Illustrated Medical Dictionary, Netter's Atlas of Human Anatomy, and online versions of many journals including The Lancet.
ScienceDirect is Elsevier's platform for online electronic access to its journals and over 40,000 e-books, reference works, book series, and handbooks. The articles are grouped in four main sections: Physical Sciences and Engineering, Life Sciences, Health Sciences, and Social Sciences and Humanities. For most articles on the website, abstracts are freely available; access to the full text of the article (in PDF, and also HTML for newer publications) often requires a subscription or pay-per-view purchase.
In 2019, Elsevier published 49,000 free open access articles and 370 full open access journals. Moreover, 1,900 of its journals sold hybrid open access options.
=== Pricing ===
The subscription rates charged by the company for its journals have been criticized; some very large journals (with more than 5,000 articles) charge subscription prices as high as £9,634, far above average, and many British universities pay more than a million pounds to Elsevier annually. The company has been criticized not only by advocates of a switch to the open-access publication model, but also by universities whose library budgets make it difficult for them to afford current journal prices.
For example, in 2004, a resolution by Stanford University's senate singled out Elsevier's journals as being "disproportionately expensive compared to their educational and research value", which librarians should consider dropping, and encouraged its faculty "not to contribute articles or editorial or review efforts to publishers and journals that engage in exploitive or exorbitant pricing". Similar guidelines and criticism of Elsevier's pricing policies have been passed by the University of California, Harvard University, and Duke University.
In July 2015, the Association of Universities in the Netherlands threatened to boycott Elsevier, which refused to negotiate on any open access policy for Dutch universities. After a year of negotiation, Elsevier pledged to make 30% of research published by Dutch researchers in Elsevier journals open access by 2018.
In October 2018, a complaint against Elsevier was filed with the European Commission, alleging anticompetitive practices stemming from Elsevier's confidential subscription agreements and market dominance. The European Commission decided not to investigate.
The elevated pricing of field journals in economics, most of which are published by Elsevier, was one of the motivations that moved the American Economic Association to launch the American Economic Journal in 2009.
=== Mergers and acquisitions ===
RELX Group has been active in mergers and acquisitions. Elsevier has incorporated other businesses that were either complementing or competing in the field of research and publishing and that reinforce its market power, such as Mendeley (after the closure of 2collab), SSRN, bepress/Digital Commons, PlumX, Hivebench, Newsflo, Science-Metrix, and Interfolio.
=== Conferences ===
Elsevier also conducts conferences, exhibitions, and workshops around the world, with over 50 conferences a year covering life sciences, physical sciences and engineering, social sciences, and health sciences.
=== Shill review offer ===
According to the BBC, in 2009 Elsevier offered a £17.25 Amazon voucher to academics who had contributed to the textbook Clinical Psychology if they would go on Amazon.com and Barnes & Noble (a large US book retailer) and give it five stars. Elsevier responded by stating, "Encouraging interested parties to post book reviews isn't outside the norm in scholarly publishing, nor is it wrong to offer to nominally compensate people for their time. But in all instances the request should be unbiased, with no incentives for a positive review, and that's where this particular e-mail went too far", and said it was a mistake by a marketing employee.
=== Blocking text mining research ===
Elsevier seeks to regulate text and data mining with private licenses, claiming that reading requires extra permission if automated and that the publisher holds copyright on the output of automated processes. Conflicts over research and copyright policy have often resulted in researchers being blocked from their own work. In November 2015, Elsevier blocked a scientist from performing text-mining research at scale on Elsevier papers, even though his institution already paid for access to Elsevier journal content. The data was being collected using the R package "statcheck".
=== Fossil fuel company consulting and advocacy ===
Elsevier is one of the most prolific publishers of books aimed at expanding the production of fossil fuels. Since at least 2010 the company has worked with the fossil fuel industry to optimise fossil fuel extraction. It commissions authors, journal advisory board members and editors who are employees of the largest oil firms. In addition it markets data services and research portals directly to the fossil fuel industry to help "increase the odds of exploration success".
== Relationship with academic institutions ==
=== Finland ===
In 2015, Finnish research organizations paid a total of 27 million euros in subscription fees, over one-third of which went to Elsevier. The information was revealed after a successful court appeal following a denied request for the subscription fees, which had been withheld under confidentiality clauses in contracts with the publishers. The disclosure led to the creation of the tiedonhinta.fi petition, signed by more than 2,800 members of the research community, demanding more reasonable pricing and open access to content. While deals with other publishers were reached, this was not the case for Elsevier, leading to the nodealnoreview.org boycott of the publisher, signed more than 600 times.
In January 2018, it was confirmed that a deal had been reached between those concerned.
=== France ===
The French Couperin consortium agreed in 2019 to a 4-year contract with Elsevier, despite criticism from the scientific community.
The French École Normale Supérieure has stopped having Elsevier publish the journal Annales Scientifiques de l'École Normale Supérieure (as of 2008).
Effective 1 January 2020, the French Academy of Sciences stopped publishing its seven Comptes rendus de l'Académie des Sciences journals with Elsevier and switched to the Centre Mersenne.
=== Germany ===
Since 2018 and as of 2023, almost no academic institution in Germany is subscribed to Elsevier.
Germany's DEAL project (Projekt DEAL), which includes over 60 major research institutions, announced that all of its members were cancelling their contracts with Elsevier, effective 1 January 2017. The boycott was in response to Elsevier's refusal to adopt "transparent business models" to "make publications more openly accessible". Horst Hippler, spokesperson for the DEAL consortium, stated that "taxpayers have a right to read what they are paying for" and that "publishers must understand that the route to open-access publishing at an affordable price is irreversible". In July 2017, another 13 institutions announced that they would also cancel their subscriptions to Elsevier journals. By August 2017, at least 185 German institutions had cancelled their contracts with Elsevier. In 2018, while negotiations were ongoing, around 200 German universities that had cancelled their subscriptions were granted complimentary open access to Elsevier journals until this arrangement ended in July of that year.
On 19 December 2018, the Max Planck Society (MPS) announced that the existing subscription agreement with Elsevier would not be renewed after the expiration date of 31 December 2018. MPS counts 14,000 scientists in 84 research institutes, publishing 12,000 articles each year.
In 2023, Elsevier and DEAL reached a tentative agreement on a publish-and-read model, which would run until 2028 if at least 70% of the eligible institutions opted into it.
=== Hungary ===
In March 2018, the Hungarian Electronic Information Service National Programme entered negotiations on its 2019 Elsevier subscriptions, asking for a read-and-publish deal. Negotiations were ended by the Hungarian consortium in December 2018, and the subscription was not renewed.
=== Iran ===
In 2013, Elsevier changed its policies in response to sanctions announced by the US Office of Foreign Assets Control that year. This included a request that all Elsevier journals avoid publishing papers by Iranian nationals who are employed by the Iranian government. Elsevier executive Mark Seeley expressed regret on behalf of the company, but did not announce an intention to challenge this interpretation of the law.
=== Italy ===
CRUI (an association of Italian universities) sealed a five-year deal covering 2018–2022, despite protests from the scientific community focused on issues such as the contract's failure to prevent cost increases through double dipping.
=== Netherlands ===
In 2015, a consortium of all of Netherlands' 14 universities threatened to boycott Elsevier if it could not agree that articles by Dutch authors would be made open access and settled with the compromise of 30% of its Dutch papers becoming open access by 2018. Gerard Meijer, president of Radboud University in Nijmegen and lead negotiator on the Dutch side noted, "it's not the 100% that I hoped for".
=== Norway ===
In March 2019, the Norwegian government on behalf of 44 institutions — universities, university colleges, research institutes, and hospitals — decided to break negotiations on renewal of their subscription deal with Elsevier, because of disagreement regarding open-access policy and Elsevier's unwillingness to reduce the cost of reading access.
=== South Korea ===
In 2017, over 70 university libraries confirmed a "contract boycott" movement involving three publishers, including Elsevier. As of January 2018, while negotiations remained underway, a decision was pending on whether the participating libraries would continue the boycott. It was subsequently confirmed that an agreement had been reached.
=== Sweden ===
In May 2018, the Bibsam Consortium, which negotiates license agreements on behalf of all Swedish universities and research institutes, decided not to renew their contract with Elsevier, alleging that the publisher does not meet the demands of transition towards a more open-access model, and referring to the rapidly increasing costs for publishing. Swedish universities will still have access to articles published before 30 June 2018. Astrid Söderbergh Widding, chairman of the Bibsam Consortium, said, "the current system for scholarly communication must change and our only option is to cancel deals when they don't meet our demands for a sustainable transition to open access". Sweden has a goal of open access by 2026. In November 2019 the negotiations concluded, with Sweden paying for reading access to Elsevier journals and open access publishing for all its researchers' articles.
=== Taiwan ===
In Taiwan, more than 75% of universities, including the country's top 11 institutions, have joined a collective boycott against Elsevier. On 7 December 2016, the Taiwanese consortium, CONCERT, which represents more than 140 institutions, announced it would not renew its contract with Elsevier.
=== United States ===
In March 2018, Florida State University's faculty elected to cancel its $2 million subscription to a bundle of several journals. Starting in 2019, it will instead buy access to titles à la carte.
In February 2019, the University of California said it would terminate subscriptions "in [a] push for open access to publicly funded research". After months of negotiations over open access to research by UC researchers and prices for subscriptions to Elsevier journals, a press release by the UC Office of the President issued Thursday, 28 February 2019 stated "Under Elsevier's proposed terms, the publisher would have charged UC authors large publishing fees on top of the university's multimillion dollar subscription, resulting in much greater cost to the university and much higher profits for Elsevier." On 10 July 2019, Elsevier began restricting access to all new paywalled articles and approximately 5% of paywalled articles published before 2019.
In April 2020, the University of North Carolina elected not to renew its bundled Elsevier package, citing a failure "to provide an affordable path". Rather than extend the license, which was stated to cost $2.6 million annually, the university decided to continue subscribing to a smaller set of individual journals. The State University of New York Libraries Consortium announced a similar outcome, aided by estimates from Unpaywall Journals. Similarly, MIT announced in June 2020 that it would no longer pay for access to new Elsevier articles.
In 2022 Elsevier and the University of Michigan established an agreement to support authors who wish to publish open access.
=== Ukraine ===
In June 2020 the Ukrainian government cancelled subscriptions for all universities in the country after failed negotiations. The Ministry of Education claimed that Elsevier indexes journals in its register that call themselves Russian but are from "occupied territories".
== Criticism of academic practices ==
=== Lacking dissemination of its research ===
==== Lobbying efforts against open access ====
Elsevier has been involved in lobbying against open access. Its efforts have targeted initiatives and bodies including:
The Federal Research Public Access Act (FRPAA)
The Research Works Act
PRISM. In the case of PRISM, the Association of American Publishers hired Eric Dezenhall, the so-called "Pit Bull Of Public Relations"
Horizon 2020
Office of Science and Technology Policy (OSTP)
The European Union's Open Science Monitor was criticised after Elsevier was confirmed as a subcontractor
UK Research and Innovation.
===== Selling open-access articles =====
In 2014, 2015, 2016, and 2017, Elsevier was found to be selling some articles that should have been open access, but had been put behind a paywall. A related case occurred in 2015, when Elsevier charged for downloading an open-access article from a journal published by John Wiley & Sons. However, whether Elsevier was in violation of the license under which the article was made available on their website was not clear.
===== Action against academics posting their own articles online =====
In 2013, Digimarc, a company representing Elsevier, told the University of Calgary to remove articles published by faculty authors on university web pages; although such self-archiving of academic articles may be legal under the fair dealing provisions in Canadian copyright law, the university complied. Harvard University and the University of California, Irvine also received takedown notices for self-archived academic articles, a first for Harvard, according to Peter Suber.
Months after its acquisition of Academia.edu rival Mendeley, Elsevier sent thousands of takedown notices to Academia.edu, a practice that has since ceased following widespread complaint by academics, according to Academia.edu founder and chief executive Richard Price.
After Elsevier acquired the repository SSRN in May 2016, academics started complaining that some of their work had been removed without notice. The action was explained as a technical error.
===== Sci-Hub and LibGen lawsuit =====
In 2015, Elsevier filed a lawsuit against the sites Sci-Hub and LibGen, which make copyright-protected articles available for free. Elsevier also claimed illegal access to institutional accounts.
===== Initial rejection of the Initiative for Open Citations =====
Among the major academic publishers, Elsevier alone declined to join the Initiative for Open Citations. In the context of the resignation of the Journal of Informetrics' editorial board, the firm stated: "Elsevier invests significantly in citation extraction technology. While these are made available to those who wish to license this data, Elsevier cannot make such a large corpus of data, to which it has added significant value, available for free."
Elsevier finally joined the initiative in January 2021 after the data was already available with an Open Data Commons license in Microsoft Academic.
===== ResearchGate take down =====
A chamber of the Munich Regional Court ruled that the research networking site ResearchGate must take down articles uploaded without the consent of their original publishers, including Elsevier articles. The case was brought forward in 2017 by the Coalition for Responsible Sharing, a group of publishers that includes Elsevier and the American Chemical Society.
===== Resignation of editorial boards =====
The editorial boards of a number of journals have resigned because of disputes with Elsevier over pricing:
In 1999, the entire editorial board of the Journal of Logic Programming resigned after 16 months of unsuccessful negotiations with Elsevier about the price of library subscriptions. The personnel created a new journal, Theory and Practice of Logic Programming, with Cambridge University Press at a much lower price, while Elsevier continued publication with a new editorial board and a slightly different name (the Journal of Logic and Algebraic Programming).
In 2002, dissatisfaction at Elsevier's pricing policies caused the European Economic Association to terminate an agreement with Elsevier designating Elsevier's European Economic Review as the official journal of the association. The EEA launched a new journal, the Journal of the European Economic Association.
In 2003, the entire editorial board of the Journal of Algorithms resigned to start ACM Transactions on Algorithms with a different, lower-priced, not-for-profit publisher, at the suggestion of Journal of Algorithms founder Donald Knuth. The Journal of Algorithms continued under Elsevier with a new editorial board until October 2009, when it was discontinued.
In 2005, the editors of the International Journal of Solids and Structures resigned to start the Journal of Mechanics of Materials and Structures. However, a new editorial board was quickly established and the journal continues in apparently unaltered form.
In 2006, the entire editorial board of the distinguished mathematical journal Topology resigned because of stalled negotiations with Elsevier to lower the subscription price. This board then launched the new Journal of Topology at a far lower price, under the auspices of the London Mathematical Society. Topology then remained in circulation under a new editorial board until 2009.
In 2023, the editorial board of the open access journal NeuroImage resigned and started a new journal, because of Elsevier's unwillingness to reduce article-processing charges. The editors called Elsevier's $3,450 per article processing charge "unethical and unsustainable".
Editorial boards have also resigned over open access policies or other issues:
In 2015, Stephen Leeder was removed from his role as editor of the Medical Journal of Australia when its publisher decided to outsource the journal's production to Elsevier. As a consequence, all but one of the journal's editorial advisory committee members co-signed a letter of resignation.
In 2015, the entire editorial staff of the general linguistics journal Lingua resigned in protest of Elsevier's unwillingness to agree to their terms of Fair Open Access. Editor-in-chief Johan Rooryck also announced that the Lingua staff would establish a new journal, Glossa.
In 2019, the entire editorial board of Elsevier's Journal of Informetrics resigned over the open-access policies of its publisher and founded open-access journal called Quantitative Science Studies.
In 2020, Elsevier effectively severed the tie between the Journal of Asian Economics and the academic society that founded it, the American Committee on Asian Economic Studies (ACAES), by offering the ACAES-appointed editor, Calla Wiemer, a terminal contract for 2020. As a result, a majority of the editorial board eventually resigned.
In 2023, the editorial board of the journal Design Studies resigned over 1) Elsevier's plans to increase publications seven-fold; 2) the appointment of an external editor-in-chief who had not previously published in the journal; and 3) changes to the scope of the journal made without consulting the editorial team or the journal's parent society.
In December 2024, the editorial board of Journal of Human Evolution, including emeritus editors and all but one associate editor, resigned, citing actions by Elsevier that they said "are fundamentally incompatible with the ethos of the journal and preclude maintaining the quality and integrity fundamental to JHE's success". In addition to pricing, specific complaints also included interference in the editorial board, lack of necessary support from the company, and the disruptive use of generative artificial intelligence by the company to alter submissions without informing editors or contributors.
===== "The Cost of Knowledge" boycott =====
In 2003, various university librarians began coordinating with each other to complain about Elsevier's "big deal" journal bundling packages, in which the company offered a group of journal subscriptions to libraries at a certain rate, but in which librarians claimed no economical option was available to subscribe to only the popular journals at a rate comparable to the bundled rate. Librarians continued to discuss the implications of the pricing schemes, many feeling pressured into buying the Elsevier packages without other options.
On 21 January 2012, mathematician Timothy Gowers publicly announced he would boycott Elsevier, noting that others in the field have been doing so privately. The reasons for the boycott are high subscription prices for individual journals, bundling subscriptions to journals of different value and importance, and Elsevier's support for SOPA, PIPA, and the Research Works Act, which would have prohibited open-access mandates for U.S. federally-funded research and severely restricted the sharing of scientific data.
Following this, a petition advocating noncooperation with Elsevier (that is, not submitting papers to Elsevier journals, not refereeing articles in Elsevier journals, and not participating in journal editorial boards) appeared on the site "The Cost of Knowledge". By February 2012, this petition had been signed by over 5,000 academics, growing to over 17,000 by November 2018. The firm disputed the claims, stating that its prices are below the industry average and that bundling is only one of several options available to buy access to Elsevier journals. The company also claimed that its profit margins are "simply a consequence of the firm's efficient operation". The academics replied that their work was funded by public money and should therefore be freely available.
On 27 February 2012, Elsevier issued a statement on its website that declared that it has withdrawn support from the Research Works Act. Although the Cost of Knowledge movement was not mentioned, the statement indicated the hope that the move would "help create a less heated and more productive climate" for ongoing discussions with research funders. Hours after Elsevier's statement, the sponsors of the bill, US House Representatives Darrell Issa and Carolyn Maloney, issued a joint statement saying that they would not push the bill in Congress.
===== Plan S open-access initiative =====
The Plan S open-access initiative, which began in Europe and has since spread to some US research funding agencies, would require researchers receiving some grants to publish in open-access journals by 2020. A spokesman for Elsevier said "If you think that information should be free of charge, go to Wikipedia". In September 2018, UBS advised to sell Elsevier (RELX) stocks, noting that Plan S could affect 5-10% of scientific funding and may force Elsevier to reduce pricing.
=== "Who's Afraid of Peer Review" ===
In 2013, one of Elsevier's journals was caught in the sting set up by John Bohannon, published in Science, called "Who's Afraid of Peer Review?" The journal Drug Invention Today accepted an obviously bogus paper made up by Bohannon that should have been rejected by any good peer-review system. Instead, Drug Invention Today was among many open-access journals that accepted the fake paper for publication. As of 2014, this journal had been transferred to a different publisher.
=== Fake journals ===
At a 2009 court case in Australia where Merck & Co. was being sued by a user of Vioxx, the plaintiff alleged that Merck had paid Elsevier to publish the Australasian Journal of Bone and Joint Medicine, which had the appearance of being a peer-reviewed academic journal but in fact contained only articles favourable to Merck drugs. Merck described the journal as a "complimentary publication", denied claims that articles within it were ghost written by Merck, and stated that the articles were all reprinted from peer-reviewed medical journals. In May 2009, Elsevier Health Sciences CEO Hansen released a statement regarding Australia-based sponsored journals, conceding that they were "sponsored article compilation publications, on behalf of pharmaceutical clients, that were made to look like journals and lacked the proper disclosures". The statement acknowledged that it "was an unacceptable practice". The Scientist reported that, according to an Elsevier spokesperson, six sponsored publications "were put out by their Australia office and bore the Excerpta Medica imprint from 2000 to 2005", namely the Australasian Journal of Bone and Joint Medicine (Australas. J. Bone Joint Med.), the Australasian Journal of General Practice (Australas. J. Gen. Pract.), the Australasian Journal of Neurology (Australas. J. Neurol.), the Australasian Journal of Cardiology (Australas. J. Cardiol.), the Australasian Journal of Clinical Pharmacy (Australas. J. Clin. Pharm.), and the Australasian Journal of Cardiovascular Medicine (Australas. J. Cardiovasc. Med.). Excerpta Medica was a "strategic medical communications agency" run by Elsevier, according to the imprint's web page. In October 2010, Excerpta Medica was acquired by Adelphi Worldwide.
==== Chaos, Solitons & Fractals ====
There was speculation that the editor-in-chief of Elsevier journal Chaos, Solitons & Fractals, Mohamed El Naschie, misused his power to publish his own work without appropriate peer review. The journal had published 322 papers with El Naschie as author since 1993. The last issue of December 2008 featured five of his papers. The controversy was covered extensively in blogs. The publisher announced in January 2009 that El Naschie had retired as editor-in-chief. As of November 2011 the co-Editors-in-Chief of the journal were Maurice Courbage and Paolo Grigolini. In June 2011, El Naschie sued the journal Nature for libel, claiming that his reputation had been damaged by their November 2008 article about his retirement, which included statements that Nature had been unable to verify his claimed affiliations with certain international institutions. The suit came to trial in November 2011 and was dismissed in July 2012, with the judge ruling that the article was "substantially true", contained "honest comment", and was "the product of responsible journalism". The judgement noted that El Naschie, who represented himself in court, had failed to provide any documentary evidence that his papers had been peer-reviewed. Judge Victoria Sharp also found "reasonable and serious grounds" for suspecting that El Naschie used a range of false names to defend his editorial practice in communications with Nature, and described this behavior as "curious" and "bizarre".
=== Plagiarism ===
Elsevier's 'Duties of Authors' states that authors should ensure they have written entirely original works, and that proper acknowledgement of others' work must always be given. Elsevier claims plagiarism in all its forms constitutes unethical behaviour. Some Elsevier journals automatically screen submissions for plagiarism, but not all.
Albanian politician Taulant Muka claimed that the Elsevier journal Procedia had published plagiarism in the abstract of one of its articles. It is unclear whether Muka had access to the entirety of the article.
=== Scientific racism ===
Angela Saini has criticized the two Elsevier journals Intelligence and Personality and Individual Differences for having included on their editorial boards such well-known proponents of scientific racism as Richard Lynn and Gerhard Meisenberg; in response to her inquiries, Elsevier defended their presence as editors. The journal Intelligence has been criticized for having "occasionally included papers with pseudoscientific findings about intelligence differences between races". It is the official journal of the International Society for Intelligence Research, which organizes the controversial series of conferences London Conference on Intelligence, described by the New Statesman as a forum for scientific racism.
In response to a 2019 open letter, efforts by Retraction Watch and a petition, on 17 June 2020 Elsevier announced it was retracting an article that J. Philippe Rushton and Donald Templer published in 2012 in the Elsevier journal Personality and Individual Differences. The article had claimed that there was scientific evidence that skin color was related to aggression and sexuality in humans.
=== Manipulation of bibliometrics ===
According to the signatories of the San Francisco Declaration on Research Assessment (see also Goodhart's law), commercial academic publishers benefit from the manipulation of bibliometrics and scientometrics, such as the journal impact factor. The impact factor, which is often used as a proxy for prestige, can influence revenues, subscriptions, and academics' willingness to contribute unpaid work. However, there is evidence suggesting that the reliability of published research in several fields may decrease with increasing journal rank.
Nine Elsevier journals, which exhibited unusual levels of self-citation, had their journal impact factor of 2019 suspended from Journal Citation Reports in 2020, a sanction that hit 34 journals in total.
In 2023, the International Journal of Hydrogen Energy, which is published by Elsevier, was criticized for desk-rejecting a submitted article for the main reason that it did not cite enough articles from the same journal.
One of their journals, Journal of Analytical and Applied Pyrolysis, was involved in the manipulation of the peer review report.
=== Conflict of interest ===
Elsevier is a publisher of climate change research, but it has partnered with the fossil fuel industry. Climate scientists are concerned that this conflict of interest could undermine the credibility of climate science, because they believe that fossil fuel extraction and climate action are incompatible.
== Antitrust lawsuit ==
In September 2024, Lucina Uddin, a neuroscience professor at UCLA, sued Elsevier along with five other academic journal publishers in a proposed class-action lawsuit, alleging that the publishers violated antitrust law by agreeing not to compete against each other for manuscripts and by denying scholars payment for peer review services.
== Awards ==
Elsevier has partnered with a number of organisations and lent its name to several awards.
Since 1987, Elsevier has partnered with the academic journal Spectrochimica Acta Part B to award the Elsevier / Spectrochimica Acta Atomic Spectroscopy Award. This award is given each year for a jury-selected best paper of the year. The award is worth $1000.
Starting in 1987, the IBMS Elsevier Award was given by the International Bone and Mineral Society in partnership with Elsevier "for outstanding research and teaching throughout their career by an IBMS member in the fields of bone and mineral metabolism"; it was awarded in 1992, 1995, 1998, 2001, 2003, 2005, and 2007.
From 2007, the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) in Brazil partnered with Elsevier to award the CAPES Elsevier Award; from 2013 the award was restricted to women, to encourage more women to pursue scientific careers. As of 2014, several awards were given each year.
From 2011, the OWSD-Elsevier Foundation Awards for Early-Career Women Scientists in the Developing World (OWSD-Elsevier Foundation Awards) have been awarded annually to early-career women scientists in selected developing countries in four regions: Latin America and the Caribbean, East and Southeast Asia and the Pacific, Central and South Asia, and Sub-Saharan Africa. The Organization for Women in Science for the Developing World (OWSD), the Elsevier Foundation, and The World Academy of Sciences first partnered to recognize achievements of early-career women scientists in developing countries in 2011.
In 2016, the Elsevier Foundation awarded the Elsevier Foundation-ISC3 Green and Sustainable Chemistry Challenge. From 2021 and as of 2024, the annual award is known as the Elsevier Foundation Chemistry for Climate Action Challenge. Two prizes have been awarded each year; until 2020, the first prizewinner was awarded €50,000, and the second prize was €25,000. Since then, €25,000 has been awarded to each winner, usually an entrepreneur who has created a project or proposal that aids the fight against climate change.
== Imprints ==
Elsevier uses its imprints (that is, brand names used in publishing) to market to different consumer segments. Many of the imprints have previously been the names of publishing companies that were purchased by Reed Elsevier.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
Official website
Campaign success: Reed Elsevier sells international arms fairs Archived 6 August 2018 at the Wayback Machine
Mary H. Munroe (2004). "Reed Elsevier Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on 20 October 2014 – via Northern Illinois University.
User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that clearly communicate to the user what's important. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design). User-centered design is typically accomplished through the execution of modern design thinking which involves empathizing with the target audience, defining a problem statement, ideating potential solutions, prototyping wireframes, and testing prototypes in order to refine final interface mockups.
User interfaces are the points of interaction between users and designs.
== Three types of user interfaces ==
Graphical user interfaces (GUIs)
Users interact with visual representations on a computer's screen. The desktop is an example of a GUI.
Interfaces controlled through voice
Users interact with these through their voices. Most smart assistants, such as Siri on smartphones or Alexa on Amazon devices, use voice control.
Interactive interfaces utilizing gestures
Users interact with 3D design environments through their bodies, e.g., in virtual reality (VR) games.
Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered on their expertise, whether it is software design, user research, web design, or industrial design.
Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design and typography are utilized to support its usability, influencing how the user performs certain interactions and improving the aesthetic appeal of the design; design aesthetics may enhance or detract from the ability of users to use the functions of the interface. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
== UI design vs. UX design ==
Compared to UX design, UI design is more about the surface and overall look of a design. User interface design is a craft in which designers perform an important function in creating the user experience. UI design should keep users informed about what is happening, giving appropriate feedback in a timely manner. The visual look and feel of UI design sets the tone for the user experience. On the other hand, the term UX design refers to the entire process of creating a user experience.
Don Norman and Jakob Nielsen said: It's important to distinguish the total user experience from the user interface (UI), even though the UI is obviously an extremely important part of the design. As an example, consider a website with movie reviews. Even if the UI for finding a film is perfect, the UX will be poor for a user who wants information about a small independent release if the underlying database only contains movies from the major studios.
== Design thinking ==
User interface design requires a good understanding of user needs. It mainly focuses on the needs of the platform and its users' expectations. There are several phases and processes in user interface design, some of which receive more emphasis than others, depending on the project. The modern design thinking framework was created in 2004 by David M. Kelley, the founder of Stanford's d.school, formally known as the Hasso Plattner Institute of Design. EDIPT is a common acronym used to describe Kelley's design thinking framework: it stands for empathize, define, ideate, prototype, and test. Notably, the EDIPT framework is non-linear, so a UI designer may jump from one stage to another when developing a user-centric solution. Iteration is a common practice in the design thinking process; successful solutions often require testing and tweaking to ensure that the product fulfills user needs.
=== EDIPT ===
Empathize
Conducting user research to better understand the needs and pain points of the target audience. UI designers should avoid developing solutions based on personal beliefs and instead seek to understand the unique perspectives of various users. Qualitative data is often gathered in the form of semi-structured interviews.
Common areas of interest include:
What would the user want the system to do?
How would the system fit in with the user's normal workflow or daily activities?
How technically savvy is the user and what similar systems does the user already use?
What interface aesthetics and functionality styles appeal to the user?
Define
Solidifying a problem statement that focuses on user needs and desires; effective problem statements are typically one sentence long and include the user, their specific need, and their desired outcome or goal.
Ideate
Brainstorming potential solutions to address the refined problem statement. The proposed solutions should ideally align with the stakeholders' feasibility and viability criteria while maintaining user desirability standards.
Prototype
Designing potential solutions of varying fidelity (low, mid, and high) while applying user experience principles and methodologies. Prototyping is an iterative process where UI designers should explore multiple design solutions rather than settling on the initial concept.
Test
Presenting the prototypes to the target audience to gather feedback and gain insights for improvement. Based on the results, UI designers may need to revisit earlier stages of the design process to enhance the prototype and user experience.
== Usability testing ==
The Nielsen Norman Group, co-founded by Jakob Nielsen and Don Norman in 1998, promotes user experience and interface design education. Jakob Nielsen pioneered the interface usability movement and created the "10 Usability Heuristics for User Interface Design." Usability is aimed at defining an interface’s quality when considering ease of use; an interface with low usability will burden a user and hinder them from achieving their goals, resulting in the dismissal of the interface. To enhance usability, user experience researchers may conduct usability testing—a process that evaluates how users interact with an interface. Usability testing can provide insight into user pain points by illustrating how efficiently a user can complete a task without error, highlighting areas for design improvement.
Usability inspection
Letting an evaluator inspect a user interface. This is generally considered to be cheaper to implement than usability testing (see below), and can be used early in the development process, since it can evaluate prototypes or specifications for the system, which usually cannot be tested on users. Some common usability inspection methods include the cognitive walkthrough, which focuses on how simply new users can accomplish tasks with the system; heuristic evaluation, in which a set of heuristics is used to identify usability problems in the UI design; and the pluralistic walkthrough, in which a selected group of people step through a task scenario and discuss usability issues.
Usability testing
Testing of the prototypes on an actual user—often using a technique called think aloud protocol where the user is asked to talk about their thoughts during the experience. User interface design testing allows the designer to understand the reception of the design from the viewer's standpoint, and thus facilitates creating successful applications.
== Requirements ==
The dynamic characteristics of a system are described in terms of the dialogue requirements contained in the seven principles of part 10 of the ergonomics standard ISO 9241. This standard establishes a framework of ergonomic "principles" for dialogue techniques, with high-level definitions and illustrative applications and examples of the principles. The principles of the dialogue represent the dynamic aspects of the interface and can mostly be regarded as the "feel" of the interface.
=== Seven dialogue principles ===
Suitability for the task
The dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
Self-descriptiveness
The dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
Controllability
The dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
Conformity with user expectations
The dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions.
Error tolerance
The dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
Suitability for individualization
The dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
Suitability for learning
The dialogue is suitable for learning when it supports and guides the user in learning to use the system.
The concept of usability is defined in the ISO 9241 standard by the effectiveness, efficiency, and satisfaction of the user.
Part 11 gives the following definition of usability:
Usability is measured by the extent to which the intended goals of use of the overall system are achieved (effectiveness).
The resources that have to be expended to achieve the intended goals (efficiency).
The extent to which the user finds the overall system acceptable (satisfaction).
Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally, into usability measures.
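As a rough illustration of this decomposition, the three quality factors can be reduced to simple numeric measures. The formulas and metric names below are a simplified sketch for illustration, not definitions taken from ISO 9241 itself:

```python
# Illustrative decomposition of the three ISO 9241-11 quality factors
# into simple usability measures. The exact formulas are assumptions
# made for this sketch, not part of the standard.

def effectiveness(tasks_completed, tasks_attempted):
    """Fraction of the intended goals of use that were achieved."""
    return tasks_completed / tasks_attempted

def efficiency(tasks_completed, total_time_seconds):
    """Goals achieved per unit of expended resource (here: time)."""
    return tasks_completed / total_time_seconds

def mean_satisfaction(survey_scores):
    """Average of user-reported acceptability ratings."""
    return sum(survey_scores) / len(survey_scores)

# Example session: 9 of 10 tasks completed in 600 seconds,
# with four users rating the system on a 5-point scale.
eff = effectiveness(9, 10)             # 0.9
rate = efficiency(9, 600)              # 0.015 tasks per second
sat = mean_satisfaction([4, 5, 3, 4])  # 4.0
```

In practice each factor is further decomposed per task or per user group before being aggregated into a usability measure.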
The information presented is described in Part 12 of the ISO 9241 standard for the organization of information (arrangement, alignment, grouping, labels, location), for the display of graphical objects, and for the coding of information (abbreviation, colour, size, shape, visual cues) by seven attributes. The "attributes of presented information" represent the static aspects of the interface and can be generally regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard. Each of the recommendations supports one or more of the seven attributes.
=== Seven presentation attributes ===
Clarity
The information content is conveyed quickly and accurately.
Discriminability
The displayed information can be distinguished accurately.
Conciseness
Users are not overloaded with extraneous information.
Consistency
A uniform design, in conformity with the user's expectations.
Detectability
The user's attention is directed towards information required.
Legibility
Information is easy to read.
Comprehensibility
The meaning is clearly understandable, unambiguous, interpretable, and recognizable.
=== User guidance ===
The user guidance in Part 13 of the ISO 9241 standard describes that the user guidance information should be readily distinguishable from other displayed information and should be specific for the current context of use.
User guidance can be given by the following five means:
Prompts indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
Feedback informing the user about their input in a timely, perceptible, and non-intrusive way.
Status information indicating the continuing state of the application, the system's hardware and software components, and the user's activities.
Error management including error prevention, error correction, user support for error management, and error messages.
On-line help for system-initiated and user-initiated requests with specific information for the current context of use.
== Research ==
User interface design has been a topic of considerable research, including on its aesthetics. Standards have been developed as far back as the 1980s for defining the usability of software products.
One of the structural bases has become the IFIP user interface reference model.
The model proposes four dimensions to structure the user interface:
The input/output dimension (the look)
The dialogue dimension (the feel)
The technical or functional dimension (the access to tools and services)
The organizational dimension (the communication and co-operation support)
This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability.
The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use. Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code.
Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's law, host very complex interfaces.
== See also ==
== References == | Wikipedia/Interface_design |
The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor. A CU typically uses a binary decoder to convert coded instructions into timing and control signals that direct the operation of the other units (memory, arithmetic logic unit and input and output devices, etc.).
Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction.
== Multicycle control units ==
The simplest computers use a multicycle microarchitecture. These were the earliest designs. They are still popular in the very smallest computers, such as the embedded systems that operate machinery.
In a computer, the control unit often steps through the instruction cycle successively. This consists of fetching the instruction, fetching the operands, decoding the instruction, executing the instruction, and then writing the results back to memory. When the next instruction is placed in the control unit, it changes the behavior of the control unit to complete the instruction correctly. So, the bits of the instruction directly control the control unit, which in turn controls the computer.
The control unit may include a binary counter to tell the control unit's logic what step it should do.
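A toy model of this stepping behavior might look like the following sketch. The five-step encoding, instruction format, and register names are invented for illustration; real control units emit timing signals rather than Python calls:

```python
# Toy multicycle control unit: a step counter walks through the
# instruction cycle, and the bits of the fetched instruction steer
# the later steps. All encodings here are invented for illustration.

STEPS = ["fetch_instruction", "fetch_operands", "decode",
         "execute", "write_back"]

def run_one_instruction(memory, pc, registers):
    trace = []
    instruction = None
    for step in STEPS:                  # plays the role of the counter
        trace.append(step)
        if step == "fetch_instruction":
            instruction = memory[pc]    # e.g. ("add", dst, src1, src2)
        elif step == "execute" and instruction[0] == "add":
            _, dst, a, b = instruction
            registers[dst] = registers[a] + registers[b]
    return trace, pc + 1

regs = {"r0": 2, "r1": 3, "r2": 0}
trace, next_pc = run_one_instruction([("add", "r2", "r0", "r1")], 0, regs)
# regs["r2"] is now 5, and the trace lists all five steps in order.
```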
Multicycle control units typically use both the rising and falling edges of their square-wave timing clock. They perform one step of their operation on each edge of the timing clock, so that a four-step operation completes in two clock cycles. This doubles the speed of the computer, given the same logic family.
Many computers have two different types of unexpected events. An interrupt occurs because some type of input or output needs software attention in order to operate correctly. An exception is caused by the computer's operation. One crucial difference is that the timing of an interrupt cannot be predicted. Another is that some exceptions (e.g. a memory-not-available exception) can be caused by an instruction that needs to be restarted.
Control units can be designed to handle interrupts in one of two typical ways. If a quick response is most important, a control unit is designed to abandon work to handle the interrupt. In this case, the work in process will be restarted after the last completed instruction. If the computer is to be very inexpensive, very simple, very reliable, or to get more work done, the control unit will finish the work in process before handling the interrupt. Finishing the work is inexpensive, because it needs no register to record the last finished instruction. It is simple and reliable because it has the fewest states. It also wastes the least amount of work.
Exceptions can be made to operate like interrupts in very simple computers. If virtual memory is required, then a memory-not-available exception must retry the failing instruction.
It is common for multicycle computers to use more cycles. Sometimes a conditional jump takes longer, because the program counter has to be reloaded. Sometimes they perform multiplication or division instructions by a stepwise process resembling binary long multiplication and division. Very small computers might do arithmetic one or a few bits at a time. Some other computers have very complex instructions that take many steps.
== Pipelined control units ==
Many medium-complexity computers pipeline instructions. This design is popular because of its economy and speed.
In a pipelined computer, instructions flow through the computer. This design has several stages. For example, it might have one stage for each step of the Von Neumann cycle. A pipelined computer usually has "pipeline registers" after each stage. These store the bits calculated by a stage so that the logic gates of the next stage can use the bits to do the next step.
It is common for even numbered stages to operate on one edge of the square-wave clock, while odd-numbered stages operate on the other edge. This speeds the computer by a factor of two compared to single-edge designs.
In a pipelined computer, the control unit arranges for the flow to start, continue, and stop as a program commands. The instruction data is usually passed in pipeline registers from one stage to the next, with a somewhat separated piece of control logic for each stage. The control unit also assures that the instruction in each stage does not harm the operation of instructions in other stages. For example, if two stages must use the same piece of data, the control logic assures that the uses are done in the correct sequence.
When operating efficiently, a pipelined computer will have an instruction in each stage. It is then working on all of those instructions at the same time. It can finish about one instruction for each cycle of its clock. When a program makes a decision, and switches to a different sequence of instructions, the pipeline sometimes must discard the data in process and restart. This is called a "stall." When two instructions could interfere, sometimes the control unit must stop processing a later instruction until an earlier instruction completes. This is called a "pipeline bubble" because a part of the pipeline is not processing instructions. Pipeline bubbles can occur when two instructions operate on the same register.
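A bubble of this kind can be sketched as follows. The three-stage latency and the hazard rule are simplified assumptions; real control logic tracks hazards per stage rather than per instruction:

```python
# Simplified pipeline scheduling sketch: instructions issue one per
# cycle, but the control logic inserts bubbles when an instruction
# reads a register that an earlier, still-in-flight instruction
# writes. The stage count and hazard rule are illustrative assumptions.

def schedule(instructions, stages=3):
    """Return issue cycles, stalling until source registers are ready."""
    issue_cycle = {}
    last_write = {}   # register -> cycle its writing instruction finishes
    cycle = 0
    for i, (dst, srcs) in enumerate(instructions):
        # Bubble: wait until every source register has been written back.
        for src in srcs:
            cycle = max(cycle, last_write.get(src, 0))
        issue_cycle[i] = cycle
        last_write[dst] = cycle + stages  # result ready after all stages
        cycle += 1
    return issue_cycle

# Instruction 1 reads r1, which instruction 0 writes, so it must wait.
prog = [("r1", ()), ("r2", ("r1",))]
times = schedule(prog)
# times == {0: 0, 1: 3}: the gap larger than one cycle is the bubble.
```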
Interrupts and unexpected exceptions also stall the pipeline. If a pipelined computer abandons work for an interrupt, more work is lost than in a multicycle computer. Predictable exceptions do not need to stall. For example, if an exception instruction is used to enter the operating system, it does not cause a stall.
For the same speed of electronic logic, a pipelined computer can execute more instructions per second than a multicycle computer. Also, even though the electronic logic has a fixed maximum speed, a pipelined computer can be made faster or slower by varying the number of stages in the pipeline. With more stages, each stage does less work, and so the stage has fewer delays from the logic gates.
A pipelined model of a computer often has fewer logic gates per instruction per second than multicycle and out-of-order computers. This is because the average stage is less complex than a multicycle computer's. An out-of-order computer usually has large amounts of idle logic at any given instant. Similar calculations usually show that a pipelined computer uses less energy per instruction.
However, a pipelined computer is usually more complex and more costly than a comparable multicycle computer. It typically has more logic gates, registers and a more complex control unit. In a like way, it might use more total energy, while using less energy per instruction. Out-of-order CPUs can usually do more instructions per second because they can do several instructions at once.
== Preventing stalls ==
Control units use many methods to keep a pipeline full and avoid stalls. For example, even simple control units can assume that a backwards branch, to a lower-numbered, earlier instruction, is a loop, and will be repeated. So, a control unit with this design will always fill the pipeline with the backwards branch path. If a compiler can detect the most frequently-taken direction of a branch, the compiler can just produce instructions so that the most frequently taken branch is the preferred direction of branch. In a like way, a control unit might get hints from the compiler: Some computers have instructions that can encode hints from the compiler about the direction of branch.
Some control units do branch prediction: A control unit keeps an electronic list of the recent branches, encoded by the address of the branch instruction. This list has a few bits for each branch to remember the direction that was taken most recently.
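The "few bits" per branch are often a small saturating counter. A sketch of the common textbook two-bit scheme (the scheme is generic, not specific to any one CPU, and the table layout is an illustrative assumption):

```python
# Two-bit saturating counter branch predictor, a common textbook design.
# Each branch address maps to a counter in 0..3; values 2-3 predict
# "taken", 0-1 predict "not taken". Two consecutive surprises are needed
# to flip a prediction, which tolerates one-off anomalies such as a
# loop's final exit.

class BranchPredictor:
    def __init__(self):
        self.table = {}            # branch address -> 2-bit counter

    def predict(self, address):
        return self.table.get(address, 1) >= 2   # start weakly not-taken

    def update(self, address, taken):
        counter = self.table.get(address, 1)
        counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
        self.table[address] = counter

bp = BranchPredictor()
for _ in range(3):
    bp.update(0x40, taken=True)    # a loop branch, taken repeatedly
# The predictor now predicts "taken" for the branch at 0x40.
```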
Some control units can do speculative execution, in which a computer might have two or more pipelines, calculate both directions of a branch, and then discard the calculations of the unused direction.
Results from memory can become available at unpredictable times because very fast computers cache memory. That is, they copy limited amounts of memory data into very fast memory. The CPU must be designed to process at the very fast speed of the cache memory. Therefore, the CPU might stall when it must access main memory directly. In modern PCs, main memory is as much as three hundred times slower than cache.
To help this, out-of-order CPUs and control units were developed to process data as it becomes available. (See next section)
But what if all the calculations are complete, but the CPU is still stalled, waiting for main memory? Then, a control unit can switch to an alternative thread of execution whose data has been fetched while the thread was idle. A thread has its own program counter, a stream of instructions and a separate set of registers. Designers vary the number of threads depending on current memory technologies and the type of computer. Typical computers such as PCs and smart phones usually have control units with a few threads, just enough to keep busy with affordable memory systems. Database computers often have about twice as many threads, to keep their much larger memories busy. Graphic processing units (GPUs) usually have hundreds or thousands of threads, because they have hundreds or thousands of execution units doing repetitive graphic calculations.
When a control unit permits threads, the software also has to be designed to handle them. In general-purpose CPUs like PCs and smartphones, the threads are usually made to look very like normal time-sliced processes. At most, the operating system might need some awareness of them. In GPUs, the thread scheduling usually cannot be hidden from the application software, and is often controlled with a specialized subroutine library.
== Out of order control units ==
A control unit can be designed to finish what it can. If several instructions can be completed at the same time, the control unit will arrange it. So, the fastest computers can process instructions in a sequence that can vary somewhat, depending on when the operands or instruction destinations become available. Most supercomputers and many PC CPUs use this method. The exact organization of this type of control unit depends on the slowest part of the computer.
When the execution of calculations is the slowest, instructions flow from memory into pieces of electronics called "issue units." An issue unit holds an instruction until both its operands and an execution unit are available. Then, the instruction and its operands are "issued" to an execution unit. The execution unit does the instruction. Then the resulting data is moved into a queue of data to be written back to memory or registers. If the computer has multiple execution units, it can usually do several instructions per clock cycle.
It is common to have specialized execution units. For example, a modestly priced computer might have only one floating-point execution unit, because floating point units are expensive. The same computer might have several integer units, because these are relatively inexpensive, and can do the bulk of instructions.
One kind of control unit for issuing uses an array of electronic logic, a "scoreboard" that detects when an instruction can be issued. The "height" of the array is the number of execution units, and the "length" and "width" are each the number of sources of operands. When all the items come together, the signals from the operands and execution unit will cross. The logic at this intersection detects that the instruction can work, so the instruction is "issued" to the free execution unit. An alternative style of issuing control unit implements the Tomasulo algorithm, which reorders a hardware queue of instructions. In some sense, both styles utilize a queue. The scoreboard is an alternative way to encode and reorder a queue of instructions, and some designers call it a queue table.
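The "crossing signals" idea can be sketched in a few lines: an instruction issues only when all of its operands are ready and a matching execution unit is free. The data layout below is invented for illustration; a real scoreboard also tracks result destinations and pending writes:

```python
# Sketch of scoreboard-style issue logic. An instruction issues when its
# source operands are all ready and an execution unit of the right kind
# is free; otherwise it waits. Instruction and unit encodings are
# illustrative assumptions.

def try_issue(instruction, ready_operands, free_units):
    """Return the unit the instruction issues to, or None if it waits."""
    op, srcs, unit_kind = instruction
    if not all(src in ready_operands for src in srcs):
        return None                       # an operand is still in flight
    for unit in free_units:
        if unit.startswith(unit_kind):
            free_units.remove(unit)       # unit is now busy
            return unit
    return None                           # no free execution unit

units = ["int0", "int1", "fp0"]
ready = {"r1", "r2"}
issued = try_issue(("add", ("r1", "r2"), "int"), ready, units)
# issued == "int0"; a second integer add could still issue to "int1".
```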
With some additional logic, a scoreboard can compactly combine execution reordering, register renaming and precise exceptions and interrupts. Further it can do this without the power-hungry, complex content-addressable memory used by the Tomasulo algorithm.
If the execution is slower than writing the results, the memory write-back queue always has free entries. But what if the memory writes slowly? Or what if the destination register will be used by an "earlier" instruction that has not yet issued? Then the write-back step of the instruction might need to be scheduled. This is sometimes called "retiring" an instruction. In this case, there must be scheduling logic on the back end of execution units. It schedules access to the registers or memory that will get the results.
Retiring logic can also be designed into an issuing scoreboard or a Tomasulo queue, by including memory or register access in the issuing logic.
Out of order controllers require special design features to handle interrupts. When there are several instructions in progress, it is not clear where in the instruction stream an interrupt occurs. For input and output interrupts, almost any solution works. However, when a computer has virtual memory, an interrupt occurs to indicate that a memory access failed. This memory access must be associated with an exact instruction and an exact processor state, so that the processor's state can be saved and restored by the interrupt. A usual solution preserves copies of registers until a memory access completes.
Also, out of order CPUs have even more problems with stalls from branching, because they can complete several instructions per clock cycle, and usually have many instructions in various stages of progress. So, these control units might use all of the solutions used by pipelined processors.
== Translating control units ==
Some computers translate each single instruction into a sequence of simpler instructions. The advantage is that an out of order computer can be simpler in the bulk of its logic, while handling complex multi-step instructions. x86 Intel CPUs since the Pentium Pro translate complex CISC x86 instructions to more RISC-like internal micro-operations.
In these, the "front" of the control unit manages the translation of instructions. Operands are not translated. The "back" of the CU is an out-of-order CPU that issues the micro-operations and operands to the execution units and data paths.
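A toy sketch of such a translating front end follows. The instruction names and micro-operation encodings are invented; real x86 decoders are far more intricate and work on variable-length binary encodings:

```python
# Toy "front end" that cracks a complex memory-to-register add into
# simpler load/add/store micro-operations. Names and encodings are
# invented for illustration; operands pass through untranslated.

def translate(instruction):
    op, *operands = instruction
    if op == "add_mem":                  # memory += register
        addr, src = operands
        return [("load", "tmp", addr),
                ("add", "tmp", "tmp", src),
                ("store", addr, "tmp")]
    return [instruction]                 # already simple: pass through

micro_ops = translate(("add_mem", 0x1000, "r1"))
# Three micro-ops are handed to the out-of-order back end.
```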
== Control units for low-powered computers ==
Many modern computers have controls that minimize power usage. In battery-powered computers, such as those in cell-phones, the advantage is longer battery life. In computers with utility power, the justification is to reduce the cost of power, cooling or noise.
Most modern computers use CMOS logic. CMOS wastes power in two common ways: By changing state, i.e. "active power", and by unintended leakage. The active power of a computer can be reduced by turning off control signals. Leakage current can be reduced by reducing the electrical pressure, the voltage, making the transistors with larger depletion regions or turning off the logic completely.
Active power is easier to reduce because data stored in the logic is not affected. The usual method reduces the CPU's clock rate. Most computer systems use this method. It is common for a CPU to idle during the transition to avoid side-effects from the changing clock.
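The saving can be estimated with the standard CMOS dynamic-power relation, P = C·V²·f. The capacitance and voltage figures below are invented example numbers, not measurements of any real CPU:

```python
# CMOS active (dynamic) power scales as P = C * V^2 * f, so halving the
# clock frequency halves active power even at a fixed voltage, and
# lowering the voltage helps quadratically. Values are invented examples.

def active_power(capacitance_farads, voltage_volts, frequency_hz):
    return capacitance_farads * voltage_volts ** 2 * frequency_hz

full = active_power(1e-9, 1.0, 2e9)   # about 2 W at 2 GHz
idle = active_power(1e-9, 1.0, 1e9)   # about 1 W at 1 GHz
```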
Most computers also have a "halt" instruction. This was invented to stop non-interrupt code so that interrupt code has reliable timing. However, designers soon noticed that a halt instruction was also a good time to turn off a CPU's clock completely, reducing the CPU's active power to zero. The interrupt controller might continue to need a clock, but that usually uses much less power than the CPU.
These methods are relatively easy to design, and became so common that others were invented for commercial advantage. Many modern low-power CMOS CPUs stop and start specialized execution units and bus interfaces depending on the needed instruction. Some computers even arrange the CPU's microarchitecture to use transfer-triggered multiplexers so that each instruction uses only the exact pieces of logic it needs.
One common method is to spread the load to many CPUs, and turn off unused CPUs as the load reduces. The operating system's task switching logic saves the CPUs' data to memory. In some cases, one of the CPUs can be simpler and smaller, literally with fewer logic gates. So, it has low leakage, and it is the last to be turned off, and the first to be turned on. Also it then is the only CPU that requires special low-power features. A similar method is used in most PCs, which usually have an auxiliary embedded CPU that manages the power system. However, in PCs, the software is usually in the BIOS, not the operating system.
Theoretically, computers at lower clock speeds could also reduce leakage by reducing the voltage of the power supply. This affects the reliability of the computer in many ways, so the engineering is expensive, and it is uncommon except in relatively expensive computers such as PCs or cellphones.
Some designs can use very low leakage transistors, but these usually add cost. The depletion barriers of the transistors can be made larger to have less leakage, but this makes the transistor larger and thus both slower and more expensive. Some vendors use this technique in selected portions of an IC by constructing low leakage logic from large transistors that some processes provide for analog circuits. Some processes place the transistors above the surface of the silicon, in "fin fets", but these processes have more steps, so are more expensive. Special transistor doping materials (e.g. hafnium) can also reduce leakage, but this adds steps to the processing, making it more expensive. Some semiconductors have a larger band-gap than silicon. However, these materials and processes are currently (2020) more expensive than silicon.
Managing leakage is more difficult, because before the logic can be turned-off, the data in it must be moved to some type of low-leakage storage.
Some CPUs make use of a special type of flip-flop (to store a bit) that couples a fast, high-leakage storage cell to a slow, large (expensive) low-leakage cell. These two cells have separated power supplies. When the CPU enters a power saving mode (e.g. because of a halt that waits for an interrupt), data is transferred to the low-leakage cells, and the others are turned off. When the CPU leaves a low-leakage mode (e.g. because of an interrupt), the process is reversed.
Older designs would copy the CPU state to memory, or even disk, sometimes with specialized software. Very simple embedded systems sometimes just restart.
== Integrating with the computer ==
All modern CPUs have control logic to attach the CPU to the rest of the computer. In modern computers, this is usually a bus controller. When an instruction reads or writes memory, the control unit either controls the bus directly, or controls a bus controller. Many modern computers use the same bus interface for memory, input and output. This is called "memory-mapped I/O". To a programmer, the registers of the I/O devices appear as numbers at specific memory addresses. x86 PCs use an older method, a separate I/O bus accessed by I/O instructions.
A modern CPU also tends to include an interrupt controller. It handles interrupt signals from the system bus. The control unit is the part of the computer that responds to the interrupts.
There is often a cache controller to cache memory. The cache controller and the associated cache memory is often the largest physical part of a modern, higher-performance CPU. When the memory, bus or cache is shared with other CPUs, the control logic must communicate with them to assure that no computer ever gets out-of-date old data.
Many historic computers built some type of input and output directly into the control unit. For example, many historic computers had a front panel with switches and lights directly controlled by the control unit. These let a programmer directly enter a program and debug it. In later production computers, the most common use of a front panel was to enter a small bootstrap program to read the operating system from disk. This was annoying. So, front panels were replaced by bootstrap programs in read-only memory.
Most PDP-8 models had a data bus designed to let I/O devices borrow the control unit's memory read and write logic. This reduced the complexity and expense of high speed I/O controllers, e.g. for disk.
The Xerox Alto had a multitasking microprogrammable control unit that performed almost all I/O. This design provided most of the features of a modern PC with only a tiny fraction of the electronic logic. The dual-thread computer was run by the two lowest-priority microthreads. These performed calculations whenever I/O was not required. High priority microthreads provided (in decreasing priority) video, network, disk, a periodic timer, mouse, and keyboard. The microprogram did the complex logic of the I/O device, as well as the logic to integrate the device with the computer. For the actual hardware I/O, the microprogram read and wrote shift registers for most I/O, sometimes with resistor networks and transistors to shift output voltage levels (e.g. for video). To handle outside events, the microcontroller had microinterrupts to switch threads at the end of a thread's cycle, e.g. at the end of an instruction, or after a shift-register was accessed. The microprogram could be rewritten and reinstalled, which was very useful for a research computer.
== Functions of the control unit ==
A program of instructions in memory causes the CU to configure a CPU's data flows to manipulate the data correctly between instructions. The result is a computer that can run a complete program with no human intervention to make hardware changes between instructions (as had to be done when using only punch cards for computations, before stored-program computers with CUs were invented).
== Hardwired control unit ==
Hardwired control units are implemented through use of combinational logic units, featuring a finite number of gates that can generate specific results based on the instructions that were used to invoke those responses. Hardwired control units are generally faster than the microprogrammed designs.
This design uses a fixed architecture—it requires changes in the wiring if the instruction set is modified or changed. It can be convenient for simple, fast computers.
A controller that uses this approach can operate at high speed; however, it has little flexibility. A complex instruction set can overwhelm a designer who uses ad hoc logic design.
The hardwired approach has become less popular as computers have evolved. Previously, control units for CPUs used ad hoc logic, and they were difficult to design.
== Microprogram control unit ==
The idea of microprogramming was introduced by Maurice Wilkes in 1951 as an intermediate level for executing computer program instructions. Microprograms were organized as a sequence of microinstructions and stored in a special control memory. The algorithm for the microprogram control unit, unlike that of the hardwired control unit, is usually specified by a flowchart description. The main advantage of a microprogrammed control unit is the simplicity of its structure. The controller's outputs are organized into microinstructions, and the microprogram can be debugged and replaced much like software.
== Combination methods of design ==
A popular variation on microcode is to debug the microcode using a software simulator. The microcode is then a table of bits: a logical truth table that translates a microcode address into the control unit outputs. This truth table can be fed to a computer program that produces optimized electronic logic. The resulting control unit is almost as easy to design as a microprogrammed one, but it has the high speed and low logic-element count of a hardwired control unit. The practical result resembles a Mealy machine or Richards controller.
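The truth-table view of microcode can be sketched in a few lines; the addresses, control-signal names, and bit layout below are hypothetical, chosen only to illustrate the mapping from a microcode address to control-unit outputs:

```python
# Minimal sketch of microcode viewed as a truth table: each microcode
# address maps to a fixed word of control-unit outputs. The addresses,
# signal names, and bit assignments here are hypothetical.

# Control word layout (hypothetical): bit 0 = reg_write, bit 1 = alu_add,
# bit 2 = mem_read, bit 3 = next-address select.
MICROCODE = {
    0b00: 0b0101,  # fetch: mem_read, reg_write
    0b01: 0b0011,  # execute add: alu_add, reg_write
    0b10: 0b1000,  # select next microinstruction address
}

def control_outputs(address):
    """Translate a microcode address into control-unit output signals."""
    word = MICROCODE[address]
    return {
        "reg_write": bool(word & 0b0001),
        "alu_add":   bool(word & 0b0010),
        "mem_read":  bool(word & 0b0100),
        "next_sel":  bool(word & 0b1000),
    }

print(control_outputs(0b01))
```

In a real design, a table like this would be fed to logic-minimization tooling to produce the optimized combinational network the paragraph describes.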
== See also ==
Processor design
Computer architecture
Richards controller
Controller (computing)
== References == | Wikipedia/Hardwired_control_unit |
In the domain of central processing unit (CPU) design, hazards are problems with the instruction pipeline in CPU microarchitectures when the next instruction cannot execute in the following clock cycle, and can potentially lead to incorrect computation results. Three common types of hazards are data hazards, structural hazards, and control hazards (branching hazards).
There are several methods used to deal with hazards, including pipeline stalls/pipeline bubbling, operand forwarding, and in the case of out-of-order execution, the scoreboarding method and the Tomasulo algorithm.
== Background ==
Instructions in a pipelined processor are performed in several stages, so that at any given time several instructions are being processed in the various stages of the pipeline, such as fetch and execute. There are many different instruction pipeline microarchitectures, and instructions may be executed out-of-order. A hazard occurs when two or more of these simultaneous (possibly out of order) instructions conflict.
== Types ==
=== Structural hazards ===
A structural hazard occurs when two (or more) instructions that are already in the pipeline need the same resource. The result is that the instructions must be executed in series rather than in parallel for a portion of the pipeline. Structural hazards are sometimes referred to as resource hazards.
Example:
A situation in which multiple instructions are ready to enter the execute phase and there is a single ALU (arithmetic logic unit). One solution to such a resource hazard is to increase the available resources, for example by having multiple ports into main memory and multiple ALUs.
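The serialization caused by a structural hazard can be sketched with a deliberately simplified cycle-count model (the one-cycle-per-ALU-operation assumption is illustrative, not a property of any particular machine):

```python
# Sketch of a structural hazard: N instructions all need an ALU in the
# same cycle, but only `num_alus` ALUs exist, so execution serializes.
import math

def cycles_to_execute(num_instructions, num_alus):
    """Each cycle, at most `num_alus` instructions can use an ALU."""
    return math.ceil(num_instructions / num_alus)

# Four ready instructions, one ALU: they execute over four cycles.
print(cycles_to_execute(4, 1))  # 4
# Adding a second ALU halves the serialization.
print(cycles_to_execute(4, 2))  # 2
```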
=== Control hazards (branch hazards or instruction hazards) ===
A control hazard occurs when the pipeline makes a wrong decision on branch prediction and therefore brings instructions into the pipeline that must subsequently be discarded. The term branch hazard also refers to a control hazard.
== Eliminating hazards ==
=== Generic ===
==== Pipeline bubbling ====
Bubbling the pipeline, also termed a pipeline break or pipeline stall, is a method to preclude data, structural, and branch hazards. As instructions are fetched, control logic determines whether a hazard could/will occur. If this is true, then the control logic inserts no operations (NOPs) into the pipeline. Thus, before the next instruction (which would cause the hazard) executes, the prior one will have had sufficient time to finish and prevent the hazard. If the number of NOPs equals the number of stages in the pipeline, the processor has been cleared of all instructions and can proceed free from hazards. All forms of stalling introduce a delay before the processor can resume execution.
Flushing the pipeline occurs when a branch instruction jumps to a new memory location, invalidating all prior stages in the pipeline. These prior stages are cleared, allowing the pipeline to continue at the new instruction indicated by the branch.
=== Data hazards ===
There are several main solutions and algorithms used to resolve data hazards:
insert a pipeline bubble whenever a read after write (RAW) dependency is encountered, guaranteed to increase latency, or
use out-of-order execution to potentially prevent the need for pipeline bubbles
use operand forwarding, which routes data from later stages in the pipeline back to instructions that need it
In the case of out-of-order execution, the algorithm used can be:
scoreboarding, in which case a pipeline bubble is needed only when there is no functional unit available
the Tomasulo algorithm, which uses register renaming, allowing continual issuing of instructions
The task of removing data dependencies can be delegated to the compiler, which can fill in an appropriate number of NOP instructions between dependent instructions to ensure correct operation, or re-order instructions where possible.
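A compiler pass of the kind described can be sketched as follows; the two-cycle gap between a write and a dependent read is an assumed pipeline parameter, not a universal value, and the instruction encoding is purely illustrative:

```python
# Sketch of a compiler pass that inserts NOPs between RAW-dependent
# instructions. Instructions are (dest, sources) tuples; `gap` is the
# (assumed) number of cycles a result needs before it can be read.

def insert_nops(program, gap=2):
    out = []
    for dest, srcs in program:
        # Check whether any source register was written too recently.
        for distance in range(1, gap + 1):
            if len(out) >= distance:
                prev = out[-distance]
                if prev != "NOP" and prev[0] in srcs:
                    # Pad until the producing instruction is `gap` back.
                    out.extend(["NOP"] * (gap - distance + 1))
                    break
        out.append((dest, srcs))
    return out

# R1 = ...; R2 = R1 + 7  -> RAW dependency on R1, so NOPs are inserted.
program = [("R1", ()), ("R2", ("R1",))]
print(insert_nops(program))
```

A real compiler would prefer to fill these slots with independent instructions (re-ordering) rather than NOPs, since every NOP is a wasted cycle.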
==== Operand forwarding ====
==== Examples ====
For example, consider writing the value 3 to register 1 (which already contains a 6), then adding 7 to register 1 and storing the result in register 2:
i0: R1 = 6
i1: R1 = 3
i2: R2 = R1 + 7 = 10
Following execution, register 2 should contain the value 10. However, if i1 (write 3 to register 1) does not fully exit the pipeline before i2 starts executing, it means that R1 does not contain the value 3 when i2 performs its addition. In such an event, i2 adds 7 to the old value of register 1 (6), and so register 2 contains 13 instead, i.e.:
i0: R1 = 6
i2: R2 = R1 + 7 = 13
i1: R1 = 3
This error occurs because i2 reads register 1 before i1 has committed/stored the result of its write to register 1. So when i2 reads the contents of register 1, it still contains 6, not 3.
Forwarding (described below) helps correct such errors by depending on the fact that the output of i1 (which is 3) can be used by subsequent instructions before the value 3 is committed to/stored in Register 1.
Forwarding applied to the example means that there is no wait to commit/store the output of i1 in Register 1 (in this example, the output is 3) before making that output available to the subsequent instruction (in this case, i2). The effect is that i2 uses the correct (the more recent) value of Register 1: the commit/store was made immediately and not pipelined.
With forwarding enabled, the Instruction Decode/Execution (ID/EX) stage of the pipeline now has two inputs: the value read from the register specified (in this example, the value 6 from Register 1), and the new value of Register 1 (in this example, this value is 3) which is sent from the next stage Instruction Execute/Memory Access (EX/MEM). Added control logic is used to determine which input to use.
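The R1/R2 example can be replayed in a toy model to show the difference forwarding makes. The "pending result" dictionary below is a deliberate simplification of a real EX/MEM pipeline latch:

```python
# Toy simulation of the R1/R2 example above. Without forwarding, i2 reads
# the register file before i1's write has been committed and sees the
# stale value 6; with forwarding, i2 takes i1's result straight from the
# pipeline. The pipeline model is deliberately minimal.

def run(forwarding):
    regs = {"R1": 6, "R2": 0}
    pending = {}  # results computed but not yet written back (EX/MEM)

    def read(reg):
        if forwarding and reg in pending:
            return pending[reg]   # forward from the later pipeline stage
        return regs[reg]

    # i1: R1 = 3 (computed, but write-back has not happened yet)
    pending["R1"] = 3
    # i2: R2 = R1 + 7, issued before i1 exits the pipeline
    regs["R2"] = read("R1") + 7
    # i1's delayed write-back finally commits
    regs["R1"] = pending.pop("R1")
    return regs["R2"]

print(run(forwarding=False))  # 13 -- the hazard: stale R1 was used
print(run(forwarding=True))   # 10 -- correct result
```

The `read` function plays the role of the added control logic from the paragraph above: it chooses between the register-file value and the forwarded value.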
=== Control hazards (branch hazards) ===
To avoid control hazards microarchitectures can:
insert a pipeline bubble (discussed above), guaranteed to increase latency, or
use branch prediction and essentially make educated guesses about which instructions to insert, in which case a pipeline bubble will only be needed in the case of an incorrect prediction
If a branch causes a pipeline bubble after incorrect instructions have entered the pipeline, care must be taken to prevent any of the wrongly loaded instructions from having an effect on the processor state, aside from the energy wasted processing them before they were discovered to have been loaded incorrectly.
=== Other techniques ===
Memory latency is another factor that designers must attend to, because the delay could reduce performance. Different types of memory have different access times. Thus, by choosing a suitable type of memory, designers can improve the performance of the pipelined data path.
== See also ==
== References ==
=== General ===
== External links ==
"Automatic Pipelining from Transactional Datapath Specifications" (PDF). Retrieved 23 July 2014.
Tulsen, Dean (18 January 2005). "Pipeline hazards" (PDF). | Wikipedia/Control_hazard |
Layout designs (topographies) of integrated circuits are a field in the protection of intellectual property.
In United States intellectual property law, a "mask work" is a two or three-dimensional layout or topography of an integrated circuit (IC or "chip"), i.e. the arrangement on a chip of semiconductor devices such as transistors and passive electronic components such as resistors and interconnections. The layout is called a mask work because, in photolithographic processes, the multiple etched layers within actual ICs are each created using a mask, called the photomask, to permit or block the light at specific locations, sometimes for hundreds of chips on a wafer simultaneously.
Because of the functional nature of the mask geometry, the designs cannot be effectively protected under copyright law (except perhaps as decorative art). Similarly, because individual lithographic mask works are not clearly protectable subject matter, they also cannot be effectively protected under patent law, although any processes implemented in the work may be patentable. Since the 1990s, therefore, national governments have been granting copyright-like exclusive rights conferring time-limited exclusivity to reproduction of a particular layout. The terms of integrated circuit rights are usually shorter than the copyright terms applicable to pictorial works.
== International law ==
A diplomatic conference was held at Washington, D.C., in 1989, which adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The Treaty, signed at Washington on May 26, 1989, is open to member states of the United Nations (UN) World Intellectual Property Organization (WIPO) and to intergovernmental organizations meeting certain criteria. The Treaty has been incorporated by reference into the TRIPS Agreement of the World Trade Organization (WTO), subject to the following modifications: the term of protection is at least 10 (rather than eight) years from the date of filing an application or of the first commercial exploitation in the world, but Members may provide a term of protection of 15 years from the creation of the layout-design; the exclusive right of the right-holder extends also to articles incorporating integrated circuits in which a protected layout-design is incorporated, in so far as it continues to contain an unlawfully reproduced layout-design; the circumstances in which layout-designs may be used without the consent of right-holders are more restricted; certain acts engaged in unknowingly will not constitute infringement.
The IPIC Treaty is currently not in force, but was partially integrated into the TRIPS agreement.
Article 35 of TRIPS in Relation to the IPIC Treaty states:
Members agree to provide protection to the layout-designs (topographies) of integrated circuits (referred to in this Agreement as "layout-designs") in accordance with Articles 2 through 7 (other than paragraph 3 of Article 6), Article 12 and paragraph 3 of Article 16 of the Treaty on Intellectual Property in Respect of Integrated Circuits and, in addition, to comply with the following provisions.
Article 2 of the IPIC Treaty gives the following definitions:
(i) 'integrated circuit' means a product, in its final form or an intermediate form, in which the elements, at least one of which is an active element, and some or all of the inter-connections are integrally formed in and/or on a piece of material and which is intended to perform an electronic function,
(ii) 'layout-design (topography)' means the three-dimensional disposition, however expressed, of the elements, at least one of which is an active element, and of some or all of the interconnections of an integrated circuit, or such a three-dimensional disposition prepared for an integrated circuit intended for manufacture ...
Under the IPIC Treaty, each Contracting Party is obliged to secure, throughout its territory, exclusive rights in layout-designs (topographies) of integrated circuits, whether or not the integrated circuit concerned is incorporated in an article. Such obligation applies to layout-designs that are original in the sense that they are the result of their creators' own intellectual effort and are not commonplace among creators of layout designs and manufacturers of integrated circuits at the time of their creation.
The Contracting Parties must, as a minimum, consider the following acts to be unlawful if performed without the authorization of the holder of the right: the reproduction of the lay-out design, and the importation, sale or other distribution for commercial purposes of the layout-design or an integrated circuit in which the layout-design is incorporated. However, certain acts may be freely performed for private purposes or for the sole purpose of evaluation, analysis, research or teaching.
== National laws ==
=== United States ===
The United States Code (USC) defines a mask work as "a series of related images, however fixed or encoded, having or representing the predetermined, three-dimensional pattern of metallic, insulating, or semiconductor material present or removed from the layers of a semiconductor chip product, and in which the relation of the images to one another is such that each image has the pattern of the surface of one form of the semiconductor chip product" [(17 U.S.C. § 901(a)(2))]. Mask work exclusive rights were first granted in the US by the Semiconductor Chip Protection Act of 1984.
According to 17 U.S.C. § 904, rights in semiconductor mask works last 10 years. This contrasts with a term of 95 years for modern copyrighted works with a corporate authorship; alleged infringement of mask work rights are also not protected by a statutory fair use defense, nor by the typical backup copy exemptions that 17 U.S.C. § 117 provides for computer software. Nevertheless, as fair use in copyrighted works was originally recognized by the judiciary over a century before being codified in the Copyright Act of 1976, it is possible that the courts might likewise find a similar defense applies to mask work.
The non-obligatory symbol used in a mask work protection notice is Ⓜ (M enclosed in a circle; Unicode code point U+24C2/U+1F1AD or HTML numeric character entity Ⓜ/🆭) or *M*.
The exclusive rights in a mask work are somewhat like those of copyright: the right to reproduce the mask work or (initially) distribute an IC made using the mask work. Like the first sale doctrine, a lawful owner of an authorized IC containing a mask work may freely import, distribute or use, but not reproduce the chip (or the mask). Mask work protection is characterized as a sui generis right, i.e., one created to protect specific rights where other (more general) laws were inadequate or inappropriate.
Note that the exclusive rights granted to mask work owners are more limited than those granted to copyright or patent holders. For instance, modification (derivative works) is not an exclusive right of mask work owners. Similarly, the exclusive right of a patentee to "use" an invention would not prohibit an independently created mask work of identical geometry. Furthermore, reproduction for reverse engineering of a mask work is specifically permitted by the law. As with copyright, mask work rights exist when they are created, regardless of registration, unlike patents, which only confer rights after application, examination and issuance.
Mask work rights have more in common with copyrights than with other exclusive rights such as patents or trademarks. On the other hand, they are used alongside copyright to protect a read-only memory (ROM) component that is encoded to contain computer software.
The publisher of software for a cartridge-based video game console may seek simultaneous protection of its property under several legal constructs:
A trademark registration on the game's title and possibly other marks such as fanciful names of worlds and characters used in the game (e.g., PAC-MAN®);
A copyright registration on the program as a literary work or on the audiovisual displays generated by the work; and
A mask work registration on the ROM that contains the binary.
Ordinary copyright law applies to the underlying software (source, binary) and original characters and art.
But the expiration date for the term of additional exclusive rights in a work distributed in the form of a mask ROM would depend on an as yet untested interpretation of the originality requirement of § 902(b):
(b) Protection under this chapter (i.e., as a mask work) shall not be available for a mask work that—
(1) is not original; or
(2) consists of designs that are staple, commonplace, or familiar in the semiconductor industry, or variations of such designs, combined in a way that, considered as a whole, is not original
(17 U.S.C. § 902, as of November 2010).
Under one interpretation, a mask work containing a given game title is either entirely unoriginal, as mask ROM in general is likely a familiar design, or a minor variation of the mask work for any of the first titles released for the console in the region.
=== Other countries ===
Protection of circuit layout design legislation exists across the globe:
Equivalent legislation exists in Australia, India and Hong Kong.
Australian law refers to mask works as "eligible layouts" or ELs.
In Canada these rights are protected under the [Integrated Circuit Topography Act (1990, c. 37)].
In the European Union, a sui generis design right protecting the design of materials was introduced by the Directive 87/54/EEC which is transposed in all member states.
India has the Semiconductor Integrated Circuits Layout Design Act, 2000 for the similar protection.
Japan relies on "The Act Concerning the Circuit Layout of a Semiconductor Integrated Circuit".
Brazil has enacted Law No. 11484, of 2007, to regulate the protection and registration of integrated circuit topography.
Switzerland has the Topographies Act of 1992.
== See also ==
Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS)
Semiconductor intellectual property core
== References ==
== External links ==
Text of the Washington Treaty on IC protection Archived January 21, 2012, at the Wayback Machine | Wikipedia/Integrated_circuit_layout_design_protection |
A microcontroller (MC, uC, or μC) or microcontroller unit (MCU) is a small computer on a single integrated circuit. A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory in the form of NOR flash, OTP ROM, or ferroelectric RAM is also often included on the chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications consisting of various discrete chips.
In modern terminology, a microcontroller is similar to, but less sophisticated than, a system on a chip (SoC). A SoC may include a microcontroller as one of its components but usually integrates it with advanced peripherals like a graphics processing unit (GPU), a Wi-Fi module, or one or more coprocessors.
Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys, and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make digital control of more devices and processes practical. Mixed-signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. In the context of the Internet of Things, microcontrollers are an economical and popular means of data collection, sensing and actuating the physical world as edge devices.
Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz for low power consumption (single-digit milliwatts or microwatts). They generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption.
== History ==
=== Background ===
The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released on a single MOS LSI chip in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. It was followed by the 4-bit Intel 4040, the 8-bit Intel 8008, and the 8-bit Intel 8080. All of these processors required several external chips to implement a working system, including memory and peripheral interface chips. As a result, the total system cost was several hundred (1970s US) dollars, making it impossible to economically computerize small appliances.
MOS Technology introduced its sub-$100 microprocessors in 1975, the 6501 and 6502. Their chief aim was to reduce this cost barrier, but these microprocessors still required external support, memory, and peripheral chips, which kept the total system cost in the hundreds of dollars.
=== Development ===
One book credits TI engineers Gary Boone and Michael Cochran with the successful creation of the first microcontroller in 1971. The result of their work was the TMS 1000, which became commercially available in 1974. It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems.
During the early-to-mid-1970s, Japanese electronics manufacturers began producing microcontrollers for automobiles, including 4-bit MCUs for in-car entertainment, automatic wipers, electronic locks, and dashboards, and 8-bit MCUs for engine control.
Partly in response to the existence of the single-chip TMS 1000, Intel developed a computer system on a chip optimized for control applications, the Intel 8048, with commercial parts first shipping in 1977. It combined RAM and ROM on the same chip with a microprocessor. Among numerous applications, this chip would eventually find its way into over one billion PC keyboards. At that time Intel's President, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history, and he expanded the microcontroller division's budget by over 25%.
Most microcontrollers at this time were available in two variants. One had EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light. These erasable chips were often used for prototyping. The other variant was either a mask-programmed ROM or a PROM variant which was only programmable once; the latter was sometimes designated OTP, standing for "one-time programmable". In an OTP microcontroller, the PROM was usually of identical type to the EPROM, but the chip package had no quartz window; because there was no way to expose the EPROM to ultraviolet light, it could not be erased. Because the erasable versions required ceramic packages with quartz windows, they were significantly more expensive than the OTP versions, which could be made in lower-cost opaque plastic packages. For the erasable variants, quartz was required, instead of less expensive glass, for its transparency to ultraviolet light (to which glass is largely opaque), but the main cost differentiator was the ceramic package itself. Piggyback microcontrollers were also used.
In 1993, the introduction of EEPROM memory allowed microcontrollers (beginning with the Microchip PIC16C84) to be electrically erased quickly without an expensive package as required for EPROM, allowing both rapid prototyping, and in-system programming. (EEPROM technology had been available prior to this time, but the earlier EEPROM was more expensive and less durable, making it unsuitable for low-cost mass-produced microcontrollers.) The same year, Atmel introduced the first microcontroller using Flash memory, a special type of EEPROM. Other companies rapidly followed suit, with both memory types.
Nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors.
=== Volume and cost ===
In 2002, about 55% of all CPUs sold in the world were 8-bit microcontrollers and microprocessors.
Over two billion 8-bit microcontrollers were sold in 1997, and according to Semico, over four billion 8-bit microcontrollers were sold in 2006. More recently, Semico has claimed the MCU market grew 36.5% in 2010 and 12% in 2011.
A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has about 30 microcontrollers. They can also be found in many electrical devices such as washing machines, microwave ovens, and telephones.
Historically, the 8-bit segment has dominated the MCU market [..] 16-bit microcontrollers became the largest volume MCU category in 2011, overtaking 8-bit devices for the first time that year [..] IC Insights believes the makeup of the MCU market will undergo substantial changes in the next five years with 32-bit devices steadily grabbing a greater share of sales and unit volumes. By 2017, 32-bit MCUs are expected to account for 55% of microcontroller sales [..] In terms of unit volumes, 32-bit MCUs are expected account for 38% of microcontroller shipments in 2017, while 16-bit devices will represent 34% of the total, and 4-/8-bit designs are forecast to be 28% of units sold that year.
The 32-bit MCU market is expected to grow rapidly due to increasing demand for higher levels of precision in embedded-processing systems and the growth in connectivity using the Internet. [..] In the next few years, complex 32-bit MCUs are expected to account for over 25% of the processing power in vehicles.
Cost to manufacture can be under US$0.10 per unit.
Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under US$0.03 in 2018, and some 32-bit microcontrollers around US$1 for similar quantities.
In 2012, following a global crisis, with the worst-ever annual sales decline and recovery and the average sales price plunging 17% year-over-year (the biggest reduction since the 1980s), the average price for a microcontroller was US$0.88 (US$0.69 for 4-/8-bit, US$0.59 for 16-bit, US$1.76 for 32-bit).
In 2012, worldwide sales of 8-bit microcontrollers were around US$4 billion, while 4-bit microcontrollers also saw significant sales.
In 2015, 8-bit microcontrollers could be bought for US$0.311 (1,000 units), 16-bit for US$0.385 (1,000 units), and 32-bit for US$0.378 (1,000 units, but at US$0.35 for 5,000).
In 2018, 8-bit microcontrollers could be bought for US$0.03, 16-bit for US$0.393 (1,000 units, but at US$0.563 for 100 or US$0.349 for full reel of 2,000), and 32-bit for US$0.503 (1,000 units, but at US$0.466 for 5,000).
In 2018, the low-priced microcontrollers above from 2015 were all more expensive (with inflation calculated between 2018 and 2015 prices for those specific units) at: the 8-bit microcontroller could be bought for US$0.319 (1,000 units) or 2.6% higher, the 16-bit one for US$0.464 (1,000 units) or 21% higher, and the 32-bit one for US$0.503 (1,000 units, but at US$0.466 for 5,000) or 33% higher.
=== Smallest computer ===
On 21 June 2018, the "world's smallest computer" was announced by the University of Michigan. The device is a "0.04 mm3 16 nW wireless and batteryless sensor system with integrated Cortex-M0+ processor and optical communication for cellular temperature measurement." It "measures just 0.3 mm to a side—dwarfed by a grain of rice. [...] In addition to the RAM and photovoltaics, the new computing devices have processors and wireless transmitters and receivers. Because they are too small to have conventional radio antennae, they receive and transmit data with visible light. A base station provides light for power and programming, and it receives the data." The device is 1⁄10th the size of IBM's previously claimed world-record-sized computer, announced in March 2018, which is "smaller than a grain of salt", has a million transistors, costs less than $0.10 to manufacture, and, combined with blockchain technology, is intended for logistics and "crypto-anchors"—digital fingerprint applications.
== Embedded design ==
A microcontroller can be considered a self-contained system with a processor, memory and peripherals and can be used as an embedded system. The majority of microcontrollers in use today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems.
While some embedded systems are very sophisticated, many have minimal requirements for memory and program length, with no operating system and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small or custom liquid-crystal displays, radio-frequency devices, and sensors for data such as temperature, humidity, and light level. Embedded systems usually have no keyboard, screen, disks, printers, or other recognizable I/O devices of a personal computer, and may lack human-interaction devices of any kind.
=== Interrupts ===
Microcontrollers must provide real-time (predictable, though not necessarily fast) response to events in the embedded system they are controlling. When certain events occur, an interrupt system can signal the processor to suspend processing the current instruction sequence and to begin an interrupt service routine (ISR, or "interrupt handler") which will perform any processing required based on the source of the interrupt, before returning to the original instruction sequence. Possible interrupt sources are device-dependent and often include events such as an internal timer overflow, completing an analog-to-digital conversion, a logic-level change on an input such as from a button being pressed, and data received on a communication link. Where power consumption is important as in battery devices, interrupts may also wake a microcontroller from a low-power sleep state where the processor is halted until required to do something by a peripheral event.
=== Programs ===
Typically microcontroller programs must fit in the available on-chip memory, since it would be costly to provide a system with external, expandable memory. Compilers and assemblers are used to convert both high-level and assembly language code into a compact machine code for storage in the microcontroller's memory. Depending on the device, the program memory may be permanent, read-only memory that can only be programmed at the factory, or it may be field-alterable flash or erasable read-only memory.
Manufacturers have often produced special versions of their microcontrollers in order to help the hardware and software development of the target system. Originally these included EPROM versions that have a "window" on the top of the device through which program memory can be erased by ultraviolet light, ready for reprogramming after a programming ("burn") and test cycle. Since 1998, EPROM versions are rare and have been replaced by EEPROM and flash, which are easier to use (can be erased electronically) and cheaper to manufacture.
Other versions may be available where the ROM is accessed as an external device rather than as internal memory, however these are becoming rare due to the widespread availability of cheap microcontroller programmers.
The use of field-programmable devices on a microcontroller may allow field update of the firmware or permit late factory revisions to products that have been assembled but not yet shipped. Programmable memory also reduces the lead time required for deployment of a new product.
Where hundreds of thousands of identical devices are required, using parts programmed at the time of manufacture can be economical. These "mask-programmed" parts have the program laid down in the same way as the logic of the chip, at the same time.
A customized microcontroller incorporates a block of digital logic that can be personalized for additional processing capability, peripherals and interfaces that are adapted to the requirements of the application. One example is the AT91CAP from Atmel.
=== Other microcontroller features ===
Microcontrollers usually contain from several to dozens of general purpose input/output pins (GPIO). GPIO pins are software configurable to either an input or an output state. When GPIO pins are configured to an input state, they are often used to read sensors or external signals. Configured to the output state, GPIO pins can drive external devices such as LEDs or motors, often indirectly, through external power electronics.
Many embedded systems need to read sensors that produce analog signals. However, because the processor is built to interpret and process digital data, i.e. 1s and 0s, it cannot act directly on the analog signals that a device may send to it. So, an analog-to-digital converter (ADC) is used to convert the incoming data into a form that the processor can recognize. A less common feature on some microcontrollers is a digital-to-analog converter (DAC) that allows the processor to output analog signals or voltage levels.
In addition to the converters, many embedded microprocessors include a variety of timers as well. One of the most common types of timers is the programmable interval timer (PIT). A PIT may either count down from some value to zero, or up to the capacity of the count register, overflowing to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting. This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner on/off, the heater on/off, etc.
A dedicated pulse-width modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without using many CPU resources in tight timer loops.
A universal asynchronous receiver/transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU. Dedicated on-chip hardware also often includes capabilities to communicate with other devices (chips) in digital formats such as Inter-Integrated Circuit (I²C), Serial Peripheral Interface (SPI), Universal Serial Bus (USB), and Ethernet.
== Higher integration ==
Microcontrollers may not implement an external address or data bus as they integrate RAM and non-volatile memory on the same chip as the CPU. Using fewer pins, the chip can be placed in a much smaller, cheaper package.
Integrating the memory and other peripherals on a single chip and testing them as a unit increases the cost of that chip, but often results in decreased net cost of the embedded system as a whole. Even if the cost of a CPU that has integrated peripherals is slightly more than the cost of a CPU and external peripherals, having fewer chips typically allows a smaller and cheaper circuit board, and reduces the labor required to assemble and test the circuit board, in addition to tending to decrease the defect rate for the finished assembly.
A microcontroller is a single integrated circuit, commonly with the following features:
central processing unit – ranging from small and simple 4-bit processors to complex 32-bit or 64-bit processors
volatile memory (RAM) for data storage
ROM, EPROM, EEPROM or Flash memory for program and operating parameter storage
discrete input and output bits, allowing control or detection of the logic state of an individual package pin
serial input/output such as serial ports (UARTs)
other serial communications interfaces like I²C, Serial Peripheral Interface and Controller Area Network for system interconnect
peripherals such as timers, event counters, PWM generators, and watchdog timers
clock generator – often an oscillator for a quartz timing crystal, resonator or RC circuit
many include analog-to-digital converters, some include digital-to-analog converters
in-circuit programming and in-circuit debugging support
This integration drastically reduces the number of chips and the amount of wiring and circuit board space that would be needed to produce equivalent systems using separate chips. Furthermore, on low pin count devices in particular, each pin may interface to several internal peripherals, with the pin function selected by software. This allows a part to be used in a wider variety of applications than if pins had dedicated functions.
Microcontrollers have proved to be highly popular in embedded systems since their introduction in the 1970s.
Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example: 12-bit instructions used with 8-bit data registers.
The decision of which peripheral to integrate is often difficult. The microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.
Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose-built for control applications. A microcontroller instruction set usually has many instructions intended for bit manipulation (bit-wise operations) to make control programs more compact. For example, a general-purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a microcontroller could have a single instruction to provide that commonly required function.
Microcontrollers historically have not had math coprocessors, so floating-point arithmetic has been performed by software. However, some recent designs do include FPUs and DSP-optimized features. An example would be Microchip's PIC32 MIPS-based line.
== Programming environments ==
Microcontrollers were originally programmed only in assembly language, but various high-level programming languages, such as C, Python and JavaScript, are now also in common use to target microcontrollers and embedded systems. Compilers for general-purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.
Microcontrollers with specialty hardware may require their own non-standard dialects of C, such as SDCC for the 8051, which prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters may also contain nonstandard features, such as MicroPython, although a fork, CircuitPython, has looked to move hardware dependencies to libraries and have the language adhere to a more CPython standard.
Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early microcontroller Intel 8052; BASIC and FORTH on the Zilog Z8 as well as some modern devices. Typically these interpreters support interactive programming.
Simulators are available for some microcontrollers. These allow a developer to analyze what the behavior of the microcontroller and their program should be if they were using the actual part. A simulator will show the internal processor state and also that of the outputs, as well as allowing input signals to be generated. While most simulators are limited in that they cannot simulate much of the other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.
Recent microcontrollers are often integrated with on-chip debug circuitry that when accessed by an in-circuit emulator (ICE) via JTAG, allow debugging of the firmware with a debugger. A real-time ICE may allow viewing and/or manipulating of internal states while running. A tracing ICE can record executed program and MCU states before/after a trigger point.
== Types ==
As of 2008, there are several dozen microcontroller architectures and vendors including:
ARM core processors (many vendors)
ARM Cortex-M cores are specifically targeted toward microcontroller applications
Microchip Technology Atmel AVR (8-bit), AVR32 (32-bit), and AT91SAM (32-bit)
Cypress Semiconductor's M8C core used in their Cypress PSoC
Freescale ColdFire (32-bit) and S08 (8-bit)
Freescale 68HC11 (8-bit), and others based on the Motorola 6800 family
Intel 8051, also manufactured by NXP Semiconductors, Infineon and many others
Infineon: 8-bit XC800, 16-bit XE166, 32-bit XMC4000 (ARM-based Cortex-M4F), 32-bit TriCore, and 32-bit AURIX TriCore microcontrollers
Maxim Integrated MAX32600, MAX32620, MAX32625, MAX32630, MAX32650, MAX32640
MIPS
Microchip Technology PIC, (8-bit PIC16, PIC18, 16-bit dsPIC33 / PIC24), (32-bit PIC32)
NXP Semiconductors LPC1000, LPC2000, LPC3000, LPC4000 (32-bit), LPC900, LPC700 (8-bit)
Parallax Propeller
PowerPC ISE
Rabbit 2000 (8-bit)
Renesas Electronics: RL78 16-bit MCU; RX 32-bit MCU; SuperH; V850 32-bit MCU; H8; R8C 16-bit MCU
Silicon Laboratories Pipelined 8-bit 8051 microcontrollers and mixed-signal ARM-based 32-bit microcontrollers
STMicroelectronics STM8 (8-bit), ST10 (16-bit), STM32 (32-bit), SPC5 (automotive 32-bit)
Texas Instruments TI MSP430 (16-bit), MSP432 (32-bit), C2000 (32-bit)
Toshiba TLCS-870 (8-bit/16-bit)
Many others exist, some of which are used in a very narrow range of applications or are more like application processors than microcontrollers. The microcontroller market is extremely fragmented, with numerous vendors, technologies, and markets. Note that many vendors sell or have sold multiple architectures.
== Interrupt latency ==
In contrast to general-purpose computers, microcontrollers used in embedded systems often seek to optimize interrupt latency over instruction throughput. Issues include both reducing the latency and making it more predictable (to support real-time control).
When an electronic device causes an interrupt, during the context switch the intermediate results (registers) have to be saved before the software responsible for handling the interrupt can run. They must also be restored after that interrupt handler is finished. If there are more processor registers, this saving and restoring process may take more time, increasing the latency. (If an ISR does not require the use of some registers, it may simply leave them alone rather than saving and restoring them, so in that case those registers are not involved with the latency.) Ways to reduce such context/restore latency include having relatively few registers in their central processing units (undesirable because it slows down most non-interrupt processing substantially), or at least having the hardware not save them all (this fails if the software then needs to compensate by saving the rest "manually"). Another technique involves spending silicon gates on "shadow registers": One or more duplicate registers used only by the interrupt software, perhaps supporting a dedicated stack.
Other factors affecting interrupt latency include:
Cycles needed to complete current CPU activities. To minimize those costs, microcontrollers tend to have short pipelines (often three instructions or less), small write buffers, and ensure that longer instructions are continuable or restartable. RISC design principles ensure that most instructions take the same number of cycles, helping avoid the need for most such continuation/restart logic.
The length of any critical section that needs to be interrupted. Entry to a critical section restricts concurrent data structure access. When a data structure must be accessed by an interrupt handler, the critical section must block that interrupt. Accordingly, interrupt latency is increased by however long that interrupt is blocked. When there are hard external constraints on system latency, developers often need tools to measure interrupt latencies and track down which critical sections cause slowdowns.
One common technique just blocks all interrupts for the duration of the critical section. This is easy to implement, but sometimes critical sections get uncomfortably long.
A more complex technique just blocks the interrupts that may trigger access to that data structure. This is often based on interrupt priorities, which tend to not correspond well to the relevant system data structures. Accordingly, this technique is used mostly in very constrained environments.
Processors may have hardware support for some critical sections. Examples include supporting atomic access to bits or bytes within a word, or other atomic access primitives like the LDREX/STREX exclusive access primitives introduced in the ARMv6 architecture.
Interrupt nesting. Some microcontrollers allow higher priority interrupts to interrupt lower priority ones. This allows software to manage latency by giving time-critical interrupts higher priority (and thus lower and more predictable latency) than less-critical ones.
Trigger rate. When interrupts occur back-to-back, microcontrollers may avoid an extra context save/restore cycle by a form of tail call optimization.
Lower end microcontrollers tend to support fewer interrupt latency controls than higher end ones.
== Memory technology ==
Two different kinds of memory are commonly used with microcontrollers, a non-volatile memory for storing firmware and a read–write memory for temporary data.
=== Data ===
From the earliest microcontrollers to today, six-transistor SRAM is almost always used as the read/write working memory, with a few more transistors per bit used in the register file.
In addition to the SRAM, some microcontrollers also have internal EEPROM and/or NVRAM for data storage; and ones that do not have any (such as the BASIC Stamp), or where the internal memory is insufficient, are often connected to an external EEPROM or flash memory chip.
A few microcontrollers beginning in 2003 have "self-programmable" flash memory.
=== Firmware ===
The earliest microcontrollers used mask ROM to store firmware. Later microcontrollers (such as the early versions of the Freescale 68HC11 and early PIC microcontrollers) had EPROM memory, which used a translucent window to allow erasure via UV light, while production versions had no such window, being OTP (one-time-programmable). Firmware updates were equivalent to replacing the microcontroller itself, thus many products were not upgradeable.
The Motorola MC68HC805 was the first microcontroller to use EEPROM to store the firmware. EEPROM microcontrollers became more popular in 1993 when Microchip introduced the PIC16C84 and Atmel introduced an 8051-core microcontroller that was the first to use NOR flash memory to store the firmware. Today's microcontrollers almost all use flash memory, with a few models using FRAM and some ultra-low-cost parts still using OTP or mask ROM.
== See also ==
Microprocessor
System on a chip
List of common microcontrollers
List of Wi-Fi microcontrollers
List of open-source hardware projects
Microbotics
Programmable logic controller
Single-board microcontroller
== References ==
== External links ==
Source: Wikipedia/Microcontrollers
Birth control, also known as contraception, anticonception, and fertility control, is the use of methods or devices to prevent pregnancy. Birth control has been used since ancient times, but effective and safe methods of birth control only became available in the 20th century. Planning, making available, and using human birth control is called family planning. Some cultures limit or discourage access to birth control because they consider it to be morally, religiously, or politically undesirable.
The World Health Organization and United States Centers for Disease Control and Prevention provide guidance on the safety of birth control methods among women with specific medical conditions. The most effective methods of birth control are sterilization by means of vasectomy in males and tubal ligation in females, intrauterine devices (IUDs), and implantable birth control. This is followed by a number of hormone-based methods including contraceptive pills, patches, vaginal rings, and injections. Less effective methods include physical barriers such as condoms, diaphragms and birth control sponges and fertility awareness methods. The least effective methods are spermicides and withdrawal by the male before ejaculation. Sterilization, while highly effective, is not usually reversible; all other methods are reversible, most immediately upon stopping them. Safe sex practices, such as with the use of condoms or female condoms, can also help prevent sexually transmitted infections. Other birth control methods do not protect against sexually transmitted infections. Emergency birth control can prevent pregnancy if taken within 72 to 120 hours after unprotected sex. Some argue not having sex is also a form of birth control, but abstinence-only sex education may increase teenage pregnancies if offered without birth control education, due to non-compliance.
In teenagers, pregnancies are at greater risk of poor outcomes. Comprehensive sex education and access to birth control decreases the rate of unintended pregnancies in this age group. While all forms of birth control can generally be used by young people, long-acting reversible birth control such as implants, IUDs, or vaginal rings are more successful in reducing rates of teenage pregnancy. After the delivery of a child, a woman who is not exclusively breastfeeding may become pregnant again after as few as four to six weeks. Some methods of birth control can be started immediately following the birth, while others require a delay of up to six months. In women who are breastfeeding, progestin-only methods are preferred over combined oral birth control pills. In women who have reached menopause, it is recommended that birth control be continued for one year after the last menstrual period.
About 222 million women who want to avoid pregnancy in developing countries are not using a modern birth control method. Birth control use in developing countries has decreased the number of deaths during or around the time of pregnancy by 40% (about 270,000 deaths prevented in 2008) and could prevent 70% if the full demand for birth control were met. By lengthening the time between pregnancies, birth control can improve adult women's delivery outcomes and the survival of their children. In the developing world, women's earnings, assets, and weight, as well as their children's schooling and health, all improve with greater access to birth control. Birth control increases economic growth because of fewer dependent children, more women participating in the workforce, and/or less use of scarce resources.
== Methods ==
Birth control methods include barrier methods, hormonal birth control, intrauterine devices (IUDs), sterilization, and behavioral methods. They are used before or during sex while emergency contraceptives are effective for up to five days after sex. Effectiveness is generally expressed as the percentage of women who become pregnant using a given method during the first year, and sometimes as a lifetime failure rate among methods with high effectiveness, such as tubal ligation.
Birth control methods fall into two main categories: male contraception and female contraception. Common male contraceptives are withdrawal, condoms, and vasectomy. Female contraception is more developed than male contraception; female methods include contraceptive pills (combination and progestin-only), hormonal or non-hormonal IUDs, the patch, the vaginal ring, the diaphragm, the shot, the implant, fertility awareness, and tubal ligation.
The most effective methods are long-acting and do not require ongoing health care visits. Surgical sterilization, implantable hormones, and intrauterine devices all have first-year failure rates of less than 1%. Hormonal contraceptive pills, patches or vaginal rings, and the lactational amenorrhea method (LAM), if adhered to strictly, can also have first-year (or for LAM, first-6-month) failure rates of less than 1%. With typical use, first-year failure rates are considerably higher, at 9%, due to inconsistent use. Other methods such as condoms, diaphragms, and spermicides have higher first-year failure rates even with perfect usage. The American Academy of Pediatrics recommends long acting reversible birth control as first line for young individuals.
While all methods of birth control have some potential adverse effects, the risk is less than that of pregnancy. After stopping or removing many methods of birth control, including oral contraceptives, IUDs, implants and injections, the rate of pregnancy during the subsequent year is the same as for those who used no birth control.
For individuals with specific health problems, certain forms of birth control may require further investigations. For women who are otherwise healthy, many methods of birth control should not require a medical exam—including birth control pills, injectable or implantable birth control, and condoms. For example, a pelvic exam, breast exam, or blood test before starting birth control pills does not appear to affect outcomes. In 2009, the World Health Organization (WHO) published a detailed list of medical eligibility criteria for each type of birth control.
=== Hormonal ===
Hormonal contraception is available in a number of different forms, including oral pills, implants under the skin, injections, patches, IUDs and a vaginal ring. They are currently available only for women, although hormonal contraceptives for men have been and are being clinically tested. There are two types of oral birth control pills, the combined oral contraceptive pills (which contain both estrogen and a progestin) and the progestogen-only pills (sometimes called minipills). If either is taken during pregnancy, they do not increase the risk of miscarriage nor cause birth defects. Both types of birth control pills prevent fertilization mainly by inhibiting ovulation and thickening cervical mucus. They may also change the lining of the uterus and thus decrease implantation. Their effectiveness depends on the user's adherence to taking the pills.
Combined hormonal contraceptives are associated with a slightly increased risk of venous and arterial blood clots. Venous clots, on average, increase from 2.8 to 9.8 per 10,000 woman-years, which is still less than that associated with pregnancy. Due to this risk, they are not recommended in women over 35 years of age who continue to smoke. Because of the increased risk, their use is included in decision tools such as the DASH score and PERC rule used to predict the risk of blood clots.
The effect on sexual drive is varied, with an increase or decrease in some but with no effect in most. Combined oral contraceptives reduce the risk of ovarian cancer and endometrial cancer and do not change the risk of breast cancer. They often reduce menstrual bleeding and painful menstruation cramps. The lower doses of estrogen released from the vaginal ring may reduce the risk of breast tenderness, nausea, and headache associated with higher dose estrogen products.
Progestin-only pills, injections, and intrauterine devices are not associated with an increased risk of blood clots and may be used by women with a history of blood clots in their veins. In those with a history of arterial blood clots, non-hormonal birth control or a progestin-only method other than the injectable version should be used. Progestin-only pills may improve menstrual symptoms and can be used by breastfeeding women as they do not affect milk production. Irregular bleeding may occur with progestin-only methods, with some users reporting no periods. The progestins drospirenone and desogestrel minimize the androgenic side effects but increase the risks of blood clots and are thus not first line. The perfect use first-year failure rate of injectable progestin is 0.2%; the typical use first-year failure rate is 6%.
=== Barrier ===
Barrier contraceptives are devices that attempt to prevent pregnancy by physically preventing sperm from entering the uterus. They include male condoms, female condoms, cervical caps, diaphragms, and contraceptive sponges with spermicide.
Globally, condoms are the most common method of birth control. Male condoms are put on a man's erect penis and physically block ejaculated sperm from entering the body of a sexual partner. Modern condoms are most often made from latex, but some are made from other materials such as polyurethane, or lamb's intestine. Female condoms are also available, most often made of nitrile, latex or polyurethane. Male condoms have the advantage of being inexpensive, easy to use, and have few adverse effects. Making condoms available to teenagers does not appear to affect the age of onset of sexual activity or its frequency. In Japan, about 80% of couples who are using birth control use condoms, while in Germany this number is about 25%, and in the United States it is 18%.
Male condoms and the diaphragm with spermicide have typical use first-year failure rates of 18% and 12%, respectively. With perfect use condoms are more effective with a 2% first-year failure rate versus a 6% first-year rate with the diaphragm. Condoms have the additional benefit of helping to prevent the spread of some sexually transmitted infections such as HIV/AIDS, however, condoms made from animal intestines do not.
Contraceptive sponges combine a barrier with a spermicide. Like diaphragms, they are inserted vaginally before intercourse and must be placed over the cervix to be effective. Typical failure rates during the first year depend on whether or not a woman has previously given birth, being 24% in those who have and 12% in those who have not. The sponge can be inserted up to 24 hours before intercourse and must be left in place for at least six hours afterward. Allergic reactions and more severe adverse effects such as toxic shock syndrome have been reported.
=== Intrauterine devices ===
The current intrauterine devices (IUDs) are small devices, often T-shaped, containing either copper or levonorgestrel, which are inserted into the uterus. They are one form of long-acting reversible contraception, which is the most effective type of reversible birth control. The failure rate with the copper IUD is about 0.8%, while the levonorgestrel IUD has a failure rate of 0.2% in the first year of use. Among types of birth control, they, along with birth control implants, result in the greatest satisfaction among users. As of 2007, IUDs are the most widely used form of reversible contraception, with more than 180 million users worldwide.
Evidence supports effectiveness and safety in adolescents and those who have and have not previously had children. IUDs do not affect breastfeeding and can be inserted immediately after delivery. They may also be used immediately after an abortion. Once removed, even after long term use, fertility returns to normal immediately.
While copper IUDs may increase menstrual bleeding and result in more painful cramps, hormonal IUDs may reduce menstrual bleeding or stop menstruation altogether. Cramping can be treated with painkillers like non-steroidal anti-inflammatory drugs. Other potential complications include expulsion (2–5%) and rarely perforation of the uterus (less than 0.7%). A previous model of the intrauterine device (the Dalkon shield) was associated with an increased risk of pelvic inflammatory disease; however, the risk is not affected with current models in those without sexually transmitted infections around the time of insertion. IUDs appear to decrease the risk of ovarian cancer.
=== Sterilization ===
Two broad categories exist: surgical and non-surgical.
Surgical sterilization is available in the form of tubal ligation for women and vasectomy for men. Tubal ligation decreases the risk of ovarian cancer. Short term complications are twenty times less likely from a vasectomy than a tubal ligation. After a vasectomy, there may be swelling and pain of the scrotum which usually resolves in one or two weeks. Chronic scrotal pain associated with negative impact on quality of life occurs after vasectomy in about 1–2% of men. With tubal ligation, complications occur in 1 to 2 percent of procedures with serious complications usually due to the anesthesia. Neither method offers protection from sexually transmitted infections. Sometimes, salpingectomy is also used for sterilization in women.
Non-surgical sterilization methods have also been explored.
Fahim et al. found that heat exposure, especially high-intensity ultrasound, could provide either temporary or permanent contraception depending on the dose, through selective destruction of germ cells and Sertoli cells without affecting Leydig cells or testosterone levels. Chemical (drug-based) methods are also available, such as orally administered lonidamine for temporary or, depending on the dose, permanent fertility management.
Boris provides a method for chemically inducing either temporary or non-reversible sterility, depending on the dose: "Permanent sterility in human males can be obtained by a single oral dosage containing from about 18 mg/kg to about 25 mg/kg".
The permanence of this decision may cause regret in some men and women. Of women who have undergone tubal ligation after the age of 30, about 6% regret their decision, as compared with 20–24% of women who received sterilization within one year of delivery and before turning 30, and 6% in nulliparous women sterilized before the age of 30. By contrast, less than 5% of men are likely to regret sterilization. Men who are more likely to regret sterilization are younger, have young or no children, or have an unstable marriage. In a survey of biological parents, 9% stated they would not have had children if they were able to do it over again.
Although sterilization is considered a permanent procedure, it is possible to attempt a tubal reversal to reconnect the fallopian tubes or a vasectomy reversal to reconnect the vasa deferentia. In women, the desire for a reversal is often associated with a change in spouse. Pregnancy success rates after tubal reversal are between 31 and 88 percent, with complications including an increased risk of ectopic pregnancy. The number of males who request reversal is between 2 and 6 percent. Rates of success in fathering another child after reversal are between 38 and 84 percent; with success being lower the longer the period between the vasectomy and the reversal. Sperm extraction followed by in vitro fertilization may also be an option in men.
=== Behavioral ===
Behavioral methods involve regulating the timing or method of intercourse to prevent the introduction of sperm into the female reproductive tract, either altogether or when an egg may be present. If used perfectly the first-year failure rate may be around 3.4%; however, if used poorly first-year failure rates may approach 85%.
==== Fertility awareness ====
Fertility awareness methods involve determining the most fertile days of the menstrual cycle and avoiding unprotected intercourse. Techniques for determining fertility include monitoring basal body temperature, cervical secretions, or the day of the cycle. They have typical first-year failure rates of 24%; perfect use first-year failure rates depend on which method is used and range from 0.4% to 5%. The evidence on which these estimates are based, however, is poor as the majority of people in trials stop their use early. Globally, they are used by about 3.6% of couples. If based on basal body temperature and another primary sign, the method is called symptothermal. First-year failure rates of 20% overall and 0.4% for perfect use have been reported in clinical studies of the symptothermal method. Many fertility tracking apps are available, as of 2016, but they are more commonly designed to assist those trying to get pregnant rather than prevent pregnancy.
==== Withdrawal ====
The withdrawal method (also known as coitus interruptus) is the practice of ending intercourse ("pulling out") before ejaculation. The main risk of the withdrawal method is that the man may not perform the maneuver correctly or on time. First-year failure rates vary from 4% with perfect usage to 22% with typical usage. It is not considered birth control by some medical professionals.
There is little data regarding the sperm content of pre-ejaculatory fluid. While some tentative research did not find sperm, one trial found sperm present in 10 out of 27 volunteers. The withdrawal method is used as birth control by about 3% of couples.
==== Abstinence ====
Sexual abstinence may be used as a form of birth control, meaning either not engaging in any type of sexual activity, or specifically not engaging in vaginal intercourse while engaging in other forms of non-vaginal sex. Complete sexual abstinence is 100% effective in preventing pregnancy. However, among those who take a pledge to abstain from premarital sex, as many as 88% of those who engage in sex do so prior to marriage. The choice to abstain from sex cannot protect against pregnancy as a result of rape, and public health efforts emphasizing abstinence to reduce unwanted pregnancy may have limited effectiveness, especially in developing countries and among disadvantaged groups.
Deliberate non-penetrative sex without vaginal sex or deliberate oral sex without vaginal sex are also sometimes considered birth control. While this generally avoids pregnancy, pregnancy can still occur with intercrural sex and other forms of penis-near-vagina sex (genital rubbing, and the penis exiting from anal intercourse) where sperm can be deposited near the entrance to the vagina and can travel along the vagina's lubricating fluids.
Abstinence-only sex education does not reduce teenage pregnancy. Teen pregnancy rates and STI rates are generally the same or higher in states where students are given abstinence-only education, as compared with comprehensive sex education. Some authorities recommend that those using abstinence as a primary method have backup methods available (such as condoms or emergency contraceptive pills).
==== Lactation ====
The lactational amenorrhea method involves the use of a woman's natural postpartum infertility which occurs after delivery and may be extended by breastfeeding. For a postpartum woman to be infertile (protected from pregnancy), her periods must not yet have returned, she must be exclusively breastfeeding the infant, and the baby must be younger than six months. If breastfeeding is the infant's only source of nutrition and the baby is less than 6 months old, 93–99% of women are estimated to have protection from becoming pregnant in the first six months (0.75–7.5% failure rate). The failure rate increases to 4–7% at one year and 13% at two years. Feeding formula, pumping instead of nursing, the use of a pacifier, and feeding solids all increase the chances of becoming pregnant while breastfeeding. In those who are exclusively breastfeeding, about 10% begin having periods before three months and 20% before six months. In those who are not breastfeeding, fertility may return as early as four weeks after delivery.
=== Emergency ===
Emergency contraceptive methods are medications (sometimes misleadingly referred to as "morning-after pills") or devices used after unprotected sexual intercourse with the hope of preventing pregnancy. Emergency contraceptives are often given to victims of rape. They work primarily by preventing ovulation or fertilization. They are unlikely to affect implantation, but this has not been completely excluded. Several options exist, including high dose birth control pills, levonorgestrel, mifepristone, ulipristal and IUDs. All methods have minimal side effects. Providing emergency contraceptive pills to women in advance of sexual activity does not affect rates of sexually transmitted infections, condom use, pregnancy rates, or sexual risk-taking behavior. In a UK study, when a three-month "bridge" supply of the progestogen-only pill was provided by a pharmacist along with emergency contraception after sexual activity, this intervention was shown to increase the likelihood that the person would begin to use an effective method of long-term contraception.
Levonorgestrel pills, when used within 3 days, decrease the chance of pregnancy after a single episode of unprotected sex or condom failure by 70% (resulting in a pregnancy rate of 2.2%). Ulipristal, when used within 5 days, decreases the chance of pregnancy by about 85% (pregnancy rate 1.4%) and is more effective than levonorgestrel. Mifepristone is also more effective than levonorgestrel, while copper IUDs are the most effective method. IUDs can be inserted up to five days after intercourse and prevent about 99% of pregnancies after an episode of unprotected sex (pregnancy rate of 0.1 to 0.2%). This makes them the most effective form of emergency contraceptive. In those who are overweight or obese, levonorgestrel is less effective and an IUD or ulipristal is recommended.
=== Dual protection ===
Dual protection is the use of methods that prevent both sexually transmitted infections and pregnancy. This can be with condoms either alone or along with another birth control method or by the avoidance of penetrative sex.
If pregnancy is a high concern, using two methods at the same time is reasonable. For example, two forms of birth control are recommended in those taking the anti-acne drug isotretinoin or anti-epileptic drugs like carbamazepine, due to the high risk of birth defects if taken during pregnancy.
== Effects ==
=== Health ===
Contraceptive use in developing countries is estimated to have decreased the number of maternal deaths by 40% (about 270,000 deaths prevented in 2008) and could prevent 70% of deaths if the full demand for birth control were met. These benefits are achieved by reducing the number of unplanned pregnancies that subsequently result in unsafe abortions and by preventing pregnancies in those at high risk.
Birth control also improves child survival in the developing world by lengthening the time between pregnancies. In this population, outcomes are worse when a mother gets pregnant within eighteen months of a previous delivery. Delaying another pregnancy after a miscarriage, however, does not appear to alter risk and women are advised to attempt pregnancy in this situation whenever they are ready.
Teenage pregnancies, especially among younger teens, are at greater risk of adverse outcomes including early birth, low birth weight, and death of the infant. In 2012 in the United States 82% of pregnancies in those between the ages of 15 and 19 years old were unplanned. Comprehensive sex education and access to birth control are effective in decreasing pregnancy rates in this age group.
Birth control methods, especially hormonal methods, can also have undesirable side effects. The intensity of side effects can range from minor to debilitating and varies with individual experiences. These most commonly include changes in menstruation regularity and flow, nausea, breast tenderness, headaches, weight gain, and mood changes (specifically an increase in depression and anxiety). Additionally, hormonal contraception can contribute to bone mineral density loss, impaired glucose metabolism, and an increased risk of venous thromboembolism. Comprehensive sex education and transparent discussion of birth control side effects and contraindications between healthcare provider and patient are imperative.
=== Finances ===
In the developing world, birth control increases economic growth: fewer dependent children allow more women to participate in, or contribute more to, the workforce, as women are usually the primary caregivers for children. Women's earnings, assets, body mass index, and their children's schooling and body mass index all improve with greater access to birth control. Family planning, via the use of modern birth control, is one of the most cost-effective health interventions. For every dollar spent, the United Nations estimates that two to six dollars are saved. These cost savings are related to preventing unplanned pregnancies and decreasing the spread of sexually transmitted illnesses. While all methods are beneficial financially, the use of copper IUDs results in the greatest savings.
The total medical cost for a pregnancy, delivery, and care of a newborn in the United States is on average $21,000 for a vaginal delivery and $31,000 for a caesarean delivery as of 2012. In most other countries, the cost is less than half. For a child born in 2011, an average US family will spend $235,000 over 17 years to raise them.
== Prevalence ==
Globally, as of 2009, approximately 60% of those who are married and able to have children use birth control. How frequently different methods are used varies widely between countries. The most common methods in the developed world are condoms and oral contraceptives, while in Africa they are oral contraceptives, and in Latin America and Asia, sterilization. In the developing world overall, 35% of birth control is via female sterilization, 30% is via IUDs, 12% is via oral contraceptives, 11% is via condoms, and 4% is via male sterilization.
While less used in the developed countries than the developing world, the number of women using IUDs as of 2007 was more than 180 million. Avoiding sex when fertile is used by about 3.6% of women of childbearing age, with usage as high as 20% in areas of South America. As of 2005, 12% of couples are using a male form of birth control (either condoms or a vasectomy) with higher rates in the developed world. Usage of male forms of birth control has decreased between 1985 and 2009. Contraceptive use among women in Sub-Saharan Africa has risen from about 5% in 1991 to about 30% in 2006.
As of 2012, 57% of women of childbearing age want to avoid pregnancy (867 of 1,520 million). About 222 million women, however, were not able to access birth control, 53 million of whom were in sub-Saharan Africa and 97 million of whom were in Asia. This results in 54 million unplanned pregnancies and nearly 80,000 maternal deaths a year. Part of the reason that many women are without birth control is that many countries limit access due to religious or political reasons, while another contributor is poverty. Due to restrictive abortion laws in Sub-Saharan Africa, many women turn to unlicensed abortion providers for unintended pregnancy, resulting in about 2–4% obtaining unsafe abortions each year.
== History ==
=== Early history ===
The Egyptian Ebers Papyrus from 1550 BC and the Kahun Papyrus from 1850 BC have within them some of the earliest documented descriptions of birth control: the use of honey, acacia leaves and lint to be placed in the vagina to block sperm. Silphium, a species of giant fennel native to north Africa, may have been used as birth control in ancient Greece and the ancient Near East. Due to its desirability, by the first century AD, it had become so rare that it was worth more than its weight in silver and, by late antiquity, it was fully extinct. Most methods of birth control used in antiquity were probably ineffective.
The ancient Greek philosopher Aristotle (c. 384–322 BC) recommended applying cedar oil to the womb before intercourse, a method which was probably only effective on occasion. A Hippocratic text On the Nature of Women recommended that a woman drink a copper salt dissolved in water, which it claimed would prevent pregnancy for a year. This method was not only ineffective but also dangerous, as the later medical writer Soranus of Ephesus (c. 98–138 AD) pointed out. Soranus attempted to list reliable methods of birth control based on rational principles. He rejected the use of superstition and amulets and instead prescribed mechanical methods such as vaginal plugs and pessaries using wool as a base covered in oils or other gummy substances. Many of Soranus's methods were probably also ineffective.
In medieval Europe, any effort to halt pregnancy was deemed immoral by the Catholic Church, although it is believed that women of the time still used some birth control measures, such as coitus interruptus and inserting lily root and rue into the vagina. Women in the Middle Ages were also encouraged to tie weasel testicles around their thighs during sex to prevent pregnancy. The oldest condoms discovered to date were recovered in the ruins of Dudley Castle in England, and are dated back to 1640. They were made of animal gut, and were most likely used to prevent the spread of sexually transmitted infections during the English Civil War. Casanova, living in 18th-century Italy, described the use of a lambskin covering to prevent pregnancy; however, condoms only became widely available in the 20th century.
=== Birth control movement ===
The birth control movement developed during the 19th and early 20th centuries. The Malthusian League, based on the ideas of Thomas Malthus, was established in 1877 in the United Kingdom to educate the public about the importance of family planning and to advocate for getting rid of penalties for promoting birth control. It was founded during the "Knowlton trial" of Annie Besant and Charles Bradlaugh, who were prosecuted for publishing on various methods of birth control.
In the United States, Margaret Sanger and Otto Bobsein popularized the phrase "birth control" in 1914. Sanger primarily advocated for birth control on the idea that it would prevent women from seeking unsafe abortions, but during her lifetime, she began to campaign for it on the grounds that it would reduce mental and physical defects. She was mainly active in the United States but had gained an international reputation by the 1930s. At the time, under the Comstock Law, distribution of birth control information was illegal. She jumped bail in 1914 after her arrest for distributing birth control information and left the United States for the United Kingdom. In the U.K., Sanger, influenced by Havelock Ellis, further developed her arguments for birth control. She believed women needed to enjoy sex without fearing a pregnancy. During her time abroad, Sanger also saw a more flexible diaphragm in a Dutch clinic, which she thought was a better form of contraceptive. Once Sanger returned to the United States, she established a short-lived birth-control clinic with the help of her sister, Ethel Byrne, in the Brownsville section of Brooklyn, New York in 1916. It was shut down after eleven days and resulted in her arrest. The publicity surrounding the arrest, trial, and appeal sparked birth control activism across the United States. Besides her sister, Sanger was helped in the movement by her first husband, William Sanger, who distributed copies of "Family Limitation." Sanger's second husband, James Noah H. Slee, would later become involved in the movement, acting as its main funder. Sanger also contributed to the funding of research into hormonal contraceptives in the 1950s. She helped fund research by John Rock and biologist Gregory Pincus that resulted in the first hormonal contraceptive pill, later called Enovid.
The first human trials of the pill were done on patients in the Worcester State Psychiatric Hospital, after which clinical testing was done in Puerto Rico before Enovid was approved for use in the U.S. The people participating in these trials were not fully informed of the medical implications of the pill and often had minimal to no other family planning options. The newly approved birth control method was not made available to the participants after the trials, and contraceptives are still not widely accessible in Puerto Rico.
The increased use of birth control was seen by some as a form of social decay, and a decrease in fertility as a negative. Throughout the Progressive Era (1890–1920), there was an increase in voluntary associations aiding the contraceptive movement. These organizations failed to enlist more than 100,000 women, partly because the use of birth control was often compared to eugenics; even so, many women joined in search of a community of like-minded women. The ideology surrounding birth control gained traction during the Progressive Era as these voluntary associations established community. Unlike in the Victorian era, women now wanted to manage their own sexuality, and birth control became another expression of that self-interest, as women gravitated toward strong figures such as the Gibson Girl.
The first permanent birth-control clinic was established in Britain in 1921 by Marie Stopes working with the Malthusian League. The clinic, run by midwives and supported by visiting doctors, offered women birth-control advice and taught them the use of a cervical cap. Her clinic made contraception acceptable during the 1920s by presenting it in scientific terms. In 1921, Sanger founded the American Birth Control League, which later became the Planned Parenthood Federation of America. In 1924 the Society for the Provision of Birth Control Clinics was founded to campaign for municipal clinics; this led to the opening of a second clinic in Greengate, Salford in 1926. Throughout the 1920s, Stopes and other feminist pioneers, including Dora Russell and Stella Browne, played a major role in breaking down taboos about sex. In April 1930 the Birth Control Conference assembled 700 delegates and was successful in bringing birth control and abortion into the political sphere – three months later, the Ministry of Health, in the United Kingdom, allowed local authorities to give birth-control advice in welfare centres.
The National Birth Control Association was founded in Britain in 1931 and became the Family Planning Association eight years later. The Association amalgamated several British birth control-focused groups into 'a central organisation' for administering and overseeing birth control in Britain. The group incorporated the Birth Control Investigation Committee, a collective of physicians and scientists that was founded to investigate scientific and medical aspects of contraception with 'neutrality and impartiality'. Subsequently, the Association effected a series of 'pure' and 'applied' product and safety standards that manufacturers must meet to ensure their contraceptives could be prescribed as part of the Association's standard two-part-technique combining 'a rubber appliance to protect the mouth of the womb' with a 'chemical preparation capable of destroying... sperm'. Between 1931 and 1959, the Association founded and funded a series of tests to assess chemical efficacy and safety and rubber quality. These tests became the basis for the Association's Approved List of contraceptives, which was launched in 1937, and went on to become an annual publication that the expanding network of FPA clinics relied upon as a means to 'establish facts [about contraceptives] and to publish these facts as a basis on which a sound public and scientific opinion can be built'.
In 1936, the United States Court of Appeals for the Second Circuit ruled in United States v. One Package of Japanese Pessaries that medically prescribing contraception to save a person's life or well-being was not illegal under the Comstock Laws. Following this decision, the American Medical Association Committee on Contraception revoked its 1936 statement condemning birth control. A national survey in 1937 showed 71 percent of the adult population supported the use of contraception. By 1938, 374 birth control clinics were running in the United States despite their advertisement still being illegal. First Lady Eleanor Roosevelt publicly supported birth control and family planning. The restrictions on birth control in the Comstock laws were effectively rendered null and void by Supreme Court decisions Griswold v. Connecticut (1965) and Eisenstadt v. Baird (1972). In 1966, President Lyndon B. Johnson started endorsing public funding for family planning services, and the Federal Government began subsidizing birth control services for low-income families. The Affordable Care Act, passed into law on March 23, 2010, under President Barack Obama, requires all plans in the Health Insurance Marketplace to cover contraceptive methods. These include barrier methods, hormonal methods, implanted devices, emergency contraceptives, and sterilization procedures.
=== Modern methods ===
In 1909, Richard Richter developed the first intrauterine device made from silkworm gut, which was further developed and marketed in Germany by Ernst Gräfenberg in the late 1920s. In 1951, the Austrian-born American chemist Carl Djerassi, working at Syntex in Mexico City, made the hormones used in progesterone pills from Mexican yams (Dioscorea mexicana). Djerassi had chemically created the pill but was not equipped to distribute it to patients. Meanwhile, Gregory Pincus and John Rock, with help from the Planned Parenthood Federation of America, developed the first birth control pills in the 1950s, such as mestranol/noretynodrel, which became publicly available in the 1960s through the Food and Drug Administration under the name Enovid. Medical abortion became an alternative to surgical abortion with the availability of prostaglandin analogs in the 1970s and mifepristone in the 1980s.
== Society and culture ==
=== Legal positions ===
Human rights agreements require most governments to provide family planning and contraceptive information and services. These include the requirement to create a national plan for family planning services, remove laws that limit access to family planning, ensure that a wide variety of safe and effective birth control methods are available including emergency contraceptives, make sure there are appropriately trained healthcare providers and facilities at an affordable price, and create a process to review the programs implemented. If governments fail to do the above it may put them in breach of binding international treaty obligations.
In the United States, the 1965 Supreme Court decision Griswold v. Connecticut overturned a state law prohibiting the dissemination of contraception information based on a constitutional right to privacy for marital relationships. In 1972, Eisenstadt v. Baird extended this right to privacy to single people.
In 2010, the United Nations launched the Every Woman Every Child movement to assess the progress toward meeting women's contraceptive needs. The initiative has set a goal of increasing the number of users of modern birth control by 120 million women in the world's 69 poorest countries by 2020. Additionally, they aim to eradicate discrimination against girls and young women who seek contraceptives. The American Congress of Obstetricians and Gynecologists (ACOG) recommended in 2014 that oral birth control pills should be over-the-counter medications.
Since at least the 1870s, American religious, medical, legislative, and legal commentators have debated contraception laws. Ana Garner and Angela Michel have found that in these discussions men often attach reproductive rights to moral and political matters, as part of an ongoing attempt to regulate human bodies. In press coverage between 1873 and 2013 they found a divide between institutional ideology and real-life experiences of women.
=== Religious views ===
Religions vary widely in their views of the ethics of birth control. The Roman Catholic Church re-affirmed its teachings in 1968 that only natural family planning is permissible, although large numbers of Catholics in developed countries accept and use modern methods of birth control. The Greek Orthodox Church admits a possible exception to its traditional teaching forbidding the use of artificial contraception, if used within marriage for certain purposes, including the spacing of births. Among Protestants, there is a wide range of views from supporting none, such as in the Quiverfull movement, to allowing all methods of birth control. Views in Judaism range from the stricter Orthodox sect, which heavily restricts the use of birth control, to the more relaxed Reform sect, which allows most. Hindus may use both natural and modern contraceptives. A common Buddhist view is that preventing conception is acceptable, while intervening after conception has occurred is not. In Islam, contraceptives are allowed if they do not threaten health, although their use is discouraged by some.
=== World Contraception Day ===
September 26 is World Contraception Day, devoted to raising awareness and improving education about sexual and reproductive health, with a vision of a world where every pregnancy is wanted. It is supported by a group of governments and international NGOs, including the Office of Population Affairs, the Asian Pacific Council on Contraception, Centro Latinamericano Salud y Mujer, the European Society of Contraception and Reproductive Health, the German Foundation for World Population, the International Federation of Pediatric and Adolescent Gynecology, the International Planned Parenthood Federation, Marie Stopes International, Population Services International, the Population Council, the United States Agency for International Development (USAID), and Women Deliver.
=== Misconceptions ===
There are a number of common misconceptions regarding sex and pregnancy. Douching after sexual intercourse is not an effective form of birth control. Additionally, it is associated with a number of health problems and thus is not recommended. Women can become pregnant the first time they have sexual intercourse and in any sexual position. It is possible, although not very likely, to become pregnant during menstruation. Contraceptive use, regardless of its duration and type, does not have a negative effect on the ability of women to conceive following termination of use and does not significantly delay fertility. Women who use oral contraceptives for a longer duration may have a slightly lower rate of pregnancy than do women using oral contraceptives for a shorter period of time, possibly due to fertility decreasing with age.
=== Accessibility ===
Access to birth control may be affected by finances and the laws within a region or country. In the United States African American, Hispanic, and young women are disproportionately affected by limited access to birth control, as a result of financial disparity. For example, Hispanic and African American women often lack insurance coverage and are more often poor. New immigrants in the United States are not offered preventive care such as birth control.
In the United Kingdom contraception can be obtained free of charge via contraception clinics, sexual health or GUM (genitourinary medicine) clinics, via some GP surgeries, some young people's services and pharmacies.
In September 2021, France announced that women aged under 25 in France will be offered free contraception from 2022. It was elaborated that they "would not be charged for medical appointments, tests, or other medical procedures related to birth control" and that this would "cover hormonal contraception, biological tests that go with it, the prescription of contraception, and all care related to this contraception".
From August 2022 onwards contraception for women aged between 17 and 25 years will be free in the Republic of Ireland.
==== Public provisioning for contraception ====
In most parts of the world, the political attitude to contraception determines whether and how much state provisioning of contraceptive care occurs. In the United States, for example, the Republican Party and the Democratic Party have held opposite positions, contributing to continuous policy shifts over the years. In the 2010s, policies and attitudes to contraceptive care shifted abruptly between the Obama and Trump administrations. The Trump administration extensively rolled back contraceptive-care efforts and reduced federal spending compared to the efforts and funding of the Obama administration.
==== Advocacy ====
Free the Pill, a collaboration between Advocates for Youth and Ibis Reproductive Health, is working to bring birth control over the counter, covered by insurance and with no age restriction, throughout the United States.
==== Approval ====
On July 13, 2023, the FDA approved the first daily oral nonprescription over-the-counter birth control pill in the US. The pill, Opill, is expected to be more effective in preventing unintended pregnancies than condoms are. Opill is expected to be available in 2024, but the price has yet to be set. Perrigo, a pharmaceutical company based in Dublin, is the manufacturer.
== Research directions ==
=== Females ===
Improvements in existing birth control methods are needed, as around half of those who get pregnant unintentionally are using birth control at the time. Many alterations of existing contraceptive methods are being studied, including a better female condom, an improved diaphragm, a patch containing only progestin, and a vaginal ring containing long-acting progesterone. This vaginal ring appears to be effective for three or four months and is currently available in some areas of the world. For women who rarely have sex, the taking of the hormonal birth control levonorgestrel around the time of sex looks promising.
A number of methods to perform sterilization via the cervix are being studied. One involves putting quinacrine in the uterus, which causes scarring and infertility. While the procedure is inexpensive and does not require surgical skills, there are concerns regarding long-term side effects. Another substance, polidocanol, which functions in the same manner, is being looked at. A device called Essure, which expands when placed in the fallopian tubes and blocks them, was approved in the United States in 2002. In 2016, a black box warning regarding potentially serious side effects was added, and in 2018, the device was discontinued.
=== Males ===
Despite high levels of interest in male contraception, progress has been stymied by a lack of industry involvement. Most funding for male contraceptive research is derived from government or philanthropic sources.
Several novel contraceptive methods based on hormonal and non-hormonal mechanisms of action are in various stages of research and development, up to and including clinical trials; these include gels, pills, injectables, implants, and wearables.
Recent avenues of research include proteins and genes required for male fertility. For instance, the serine/threonine-protein kinase 33 (STK33) is a testis-enriched kinase that is indispensable for male fertility in humans and mice. An inhibitor of this kinase, CDD-2807, has recently been identified and induced reversible male infertility without measurable toxicity in mice. Such an inhibitor would be a potent male contraceptive if it passed safety and efficacy tests.
== Animals ==
Neutering or spaying, which involves removing some of the reproductive organs, is often carried out as a method of birth control in household pets. Many animal shelters require these procedures as part of adoption agreements. In large animals the surgery is known as castration.
Birth control is also being considered as an alternative to hunting as a means of controlling overpopulation in wild animals. Contraceptive vaccines have been found to be effective in a number of different animal populations. Kenyan goat herders fix a skirt, called an olor, to male goats to prevent them from impregnating female goats.
== See also ==
Human population planning
Immunocontraception
Misinformation related to birth control
== References ==
== Further reading ==
== External links ==
"WHO Fact Sheet". July 2017. Retrieved July 23, 2017.
"Birth Control Comparison Chart". Cedar River Clinics.
Bulk procurement of birth control by the World Health Organization
Tick–tock was a production model adopted in 2007 by chip manufacturer Intel. Under this model, every new process technology was first used to manufacture a die shrink of a proven microarchitecture (tick), followed by a new microarchitecture on the now-proven process (tock). It was replaced by the process–architecture–optimization model, which was announced in 2016 and is like a tick–tock cycle followed by an optimization phase. More generally, tick–tock is an engineering model which refreshes one half of a binary system each release cycle.
== History ==
Every "tick" represented a shrinking of the process technology of the previous microarchitecture (with minor changes, commonly to the caches, and rarely introducing new instructions, as with Broadwell in late 2014), and every "tock" designated a new microarchitecture. These occurred roughly every 12 to 18 months.
Due to the slowing rate of process improvements, in 2014 Intel created a "tock refresh" of a tock in the form of a smaller update to the microarchitecture not considered a new generation in and of itself. In March 2016, Intel announced in a Form 10-K report that it would adopt this approach going forward, deprecating the tick–tock cycle in favor of a three-step process–architecture–optimization model, under which three generations of processors are produced on a single manufacturing process, with the third generation focusing on optimization.
After introducing the Skylake architecture on a 14 nm process in 2015, its first optimization was Kaby Lake in 2016. Intel then announced a second optimization, Coffee Lake, in 2017 making a total of four generations at 14 nm before the Palm Cove die shrink to 10 nm in 2018.
== Roadmap ==
=== Pentium 4 / Core / Xeon Roadmap ===
=== Atom roadmap ===
With Silvermont, Intel attempted to introduce the tick–tock model to the Atom architecture, but problems with the 10 nm process prevented this. In the table below, process–architecture–optimization steps are used instead of tick–tock steps. There is no official confirmation that Intel applies the process–architecture–optimization model to Atom, but it helps illustrate what changed in each generation.
Note: There is also the Xeon Phi line, which has so far undergone four development steps. Its most recent top model, code-named Knights Landing (KNL; the predecessor code names all began with "Knights"), is derived from the Silvermont architecture used in the Intel Atom series but manufactured on a shrunk 14 nm (FinFET) process. In 2018, Intel announced that Knights Landing and all further Xeon Phi CPU models were discontinued. However, Intel's Sierra Forest and subsequent Atom-based Xeon CPUs are arguably a spiritual successor to Xeon Phi.
=== Both ===
== See also ==
List of Intel CPU microarchitectures
Speculative execution CPU vulnerabilities
== References ==
== External links ==
"Intel Tick–Tock Model of Architecture & Silicon Cadence". intel.com. Intel Corporation.
Intel Tick–Tock Model at IDF 2009, Anandtech.com
"Intel Tick–Tock Model at IDF 2011" (PDF). intel.com. Intel Corporation. p. 21.
Photolithography (also known as optical lithography) is a process used in the manufacturing of integrated circuits. It involves using light to transfer a pattern onto a substrate, typically a silicon wafer.
The process begins with a photosensitive material, called a photoresist, being applied to the substrate. A photomask that contains the desired pattern is then placed over the photoresist. Light is shone through the photomask, exposing the photoresist in certain areas. The exposed areas undergo a chemical change, making them either soluble or insoluble in a developer solution. After development, the pattern is transferred onto the substrate through etching, chemical vapor deposition, or ion implantation processes.
Ultraviolet (UV) light is typically used.
Photolithography processes can be classified according to the type of light used, including ultraviolet lithography, deep ultraviolet lithography, extreme ultraviolet lithography (EUVL), and X-ray lithography. The wavelength of light used determines the minimum feature size that can be formed in the photoresist.
Photolithography is the most common method for the semiconductor fabrication of integrated circuits ("ICs" or "chips"), such as solid-state memories and microprocessors. It can create extremely small patterns, down to a few nanometers in size. It provides precise control of the shape and size of the objects it creates. It can create patterns over an entire wafer in a single step, quickly and with relatively low cost. In complex integrated circuits, a wafer may go through the photolithographic cycle as many as 50 times. It is also an important technique for microfabrication in general, such as the fabrication of microelectromechanical systems. However, photolithography cannot be used to produce masks on surfaces that are not perfectly flat. And, like all chip manufacturing processes, it requires extremely clean operating conditions.
Photolithography is a subclass of microlithography, the general term for processes that generate patterned thin films. Other technologies in this broader class include the use of steerable electron beams, or more rarely, nanoimprinting, interference, magnetic fields, or scanning probes. On a broader level, it may compete with directed self-assembly of micro- and nanostructures.
Photolithography shares some fundamental principles with photography in that the pattern in the photoresist is created by exposing it to light — either directly by projection through a lens, or by illuminating a mask placed directly over the substrate, as in contact printing. The technique can also be seen as a high precision version of the method used to make printed circuit boards. The name originated from a loose analogy with the traditional photographic method of producing plates for lithographic printing on paper; however, subsequent stages in the process have more in common with etching than with traditional lithography.
Conventional photoresists typically consist of three components: resin, sensitizer, and solvent.
== Etymology ==
The root words photo, litho, and graphy all have Greek origins, with the meanings 'light', 'stone' and 'writing' respectively. As suggested by the name compounded from them, photolithography is a printing method (originally based on the use of limestone printing plates) in which light plays an essential role.
== History ==
In the 1820s, Nicephore Niepce invented a photographic process that used Bitumen of Judea, a natural asphalt, as the first photoresist. A thin coating of the bitumen on a sheet of metal, glass or stone became less soluble where it was exposed to light; the unexposed parts could then be rinsed away with a suitable solvent, baring the material beneath, which was then chemically etched in an acid bath to produce a printing plate. The light-sensitivity of bitumen was very poor and very long exposures were required, but despite the later introduction of more sensitive alternatives, its low cost and superb resistance to strong acids prolonged its commercial life into the early 20th century.
In 1940, Oskar Süß created a positive photoresist by using diazonaphthoquinone, which worked in the opposite manner: the coating was initially insoluble and was rendered soluble where it was exposed to light. In 1954, Louis Plambeck Jr. developed the Dycryl polymeric letterpress plate, which made the platemaking process faster. Development of photoresists used to be carried out in batches of wafers (batch processing) dipped into a bath of developer, but modern process offerings do development one wafer at a time (single wafer processing) to improve process control.
In 1957, Jules Andrus patented a photolithographic process for semiconductor fabrication while working at Bell Labs. At the same time, Moe Abramson and Stanislaus Danko of the US Army Signal Corps developed a technique for printing circuits.
In 1952, the U.S. military assigned Jay W. Lathrop and James R. Nall at the National Bureau of Standards (later the U.S. Army Diamond Ordnance Fuze Laboratory, which eventually merged to form the now-present Army Research Laboratory) with the task of finding a way to reduce the size of electronic circuits in order to better fit the necessary circuitry in the limited space available inside a proximity fuze. Inspired by the application of photoresist, a photosensitive liquid used to mark the boundaries of rivet holes in metal aircraft wings, Nall determined that a similar process can be used to protect the germanium in the transistors and even pattern the surface with light. During development, Lathrop and Nall were successful in creating a 2D miniaturized hybrid integrated circuit with transistors using this technique. In 1958, during the IRE Professional Group on Electron Devices (PGED) conference in Washington, D.C., they presented the first paper to describe the fabrication of transistors using photographic techniques and adopted the term "photolithography" to describe the process, marking the first published use of the term to describe semiconductor device patterning.
Despite the fact that photolithography of electronic components concerns etching metal duplicates, rather than etching stone to produce a "master" as in conventional lithographic printing, Lathrop and Nall chose the term "photolithography" over "photoetching" because the former sounded "high tech." A year after the conference, Lathrop and Nall's patent on photolithography was formally approved on June 9, 1959. Photolithography would later contribute to the development of the first semiconductor ICs as well as the first microchips.
== Process ==
A single iteration of photolithography combines several steps in sequence. Modern cleanrooms use automated, robotic wafer track systems (also known as wafer coater/developer systems) to coordinate the process. The procedure described here omits some advanced treatments, such as thinning agents. The photolithography process is carried out by the wafer track and the stepper/scanner, which are installed side by side. Wafer tracks are named after the "tracks" once used to carry wafers inside the machine, although modern machines no longer use them.
=== Cleaning ===
If organic or inorganic contaminations are present on the wafer surface, they are usually removed by wet chemical treatment, e.g. the RCA clean procedure based on solutions containing hydrogen peroxide. Other solutions made with trichloroethylene, acetone or methanol can also be used to clean.
=== Preparation ===
The wafer is initially heated to a temperature sufficient to drive off any moisture that may be present on the wafer surface; 150 °C for ten minutes is sufficient. Wafers that have been in storage must be chemically cleaned to remove contamination. A liquid or gaseous "adhesion promoter", such as Bis(trimethylsilyl)amine ("hexamethyldisilazane", HMDS), is applied to promote adhesion of the photoresist to the wafer. The surface layer of silicon dioxide on the wafer reacts with HMDS to form tri-methylated silicon dioxide, a highly water-repellent layer not unlike the layer of wax on a car's paint. This water-repellent layer prevents the aqueous developer from penetrating between the photoresist layer and the wafer's surface, thus preventing so-called lifting of small photoresist structures in the (developing) pattern. To ensure reliable development of the image, the wafer is best covered and dried on a hot plate stabilized at 120 °C.
=== Photoresist application ===
The wafer is covered with photoresist liquid by spin coating: the resist is dispensed onto the wafer, which is then spun at high speed. The top layer of resist is quickly ejected from the wafer's edge while the bottom layer still creeps slowly radially along the wafer. In this way, any 'bump' or 'ridge' of resist is removed, leaving a very flat layer. However, viscous films may result in large edge beads, areas at the edges of the wafer or photomask with increased resist thickness whose planarization has physical limits. Edge bead removal (EBR), usually done with a nozzle, is often carried out to remove this extra resist, as it could otherwise cause particulate contamination.
Final thickness is also determined by the evaporation of liquid solvents from the resist. For very small, dense features (< 125 or so nm), lower resist thicknesses (< 0.5 microns) are needed to overcome collapse effects at high aspect ratios; typical aspect ratios are < 4:1.
The photoresist-coated wafer is then prebaked to drive off excess photoresist solvent, typically at 90 to 100 °C for 30 to 60 seconds on a hotplate. A BARC coating (Bottom Anti-Reflectant Coating) may be applied before the photoresist is applied, to avoid reflections from occurring under the photoresist and to improve the photoresist's performance at smaller semiconductor nodes such as 45 nm and below. Top Anti-Reflectant Coatings (TARCs) also exist. EUV lithography is unique in the sense it allows for the use of photoresists with metal oxides.
=== Exposure and developing ===
After prebaking, the photoresist is exposed to a pattern of intense light. The exposure to light causes a chemical change that allows some of the photoresist to be removed by a special solution, called "developer" by analogy with photographic developer. Positive photoresist, the most common type, becomes soluble in the developer when exposed; with negative photoresist, unexposed regions are soluble in the developer.
A post-exposure bake (PEB) is performed before developing, typically to help reduce standing wave phenomena caused by the destructive and constructive interference patterns of the incident light. In deep ultraviolet lithography, chemically amplified resist (CAR) chemistry is used. This resist is much more sensitive to PEB time, temperature, and delay, as the resist works by creating acid when it is hit by photons, and then undergoes an "exposure" reaction (creating acid, making the polymer soluble in the basic developer, and performing a chemical reaction catalyzed by acid) which mostly occurs in the PEB.
The develop chemistry is delivered on a spinner, much like photoresist. Developers originally often contained sodium hydroxide (NaOH). However, sodium is considered an extremely undesirable contaminant in MOSFET fabrication because it degrades the insulating properties of gate oxides (specifically, sodium ions can migrate in and out of the gate, changing the threshold voltage of the transistor and making it harder or easier to turn the transistor on over time). Metal-ion-free developers such as tetramethylammonium hydroxide (TMAH) are now used. The temperature of the developer might be tightly controlled using jacketed (dual walled) hoses to within 0.2 °C. The nozzle that coats the wafer with developer may influence the amount of developer that is necessary.
The resulting wafer is then "hard-baked" if a non-chemically amplified resist was used, typically at 120 to 180 °C for 20 to 30 minutes. The hard bake solidifies the remaining photoresist, to make a more durable protecting layer in future ion implantation, wet chemical etching, or plasma etching.
From preparation until this step, the photolithography procedure has been carried out by two machines: the photolithography stepper or scanner, and the coater/developer. The two machines are usually installed side by side, and are "linked" together.
=== Etching, implantation ===
In etching, a liquid ("wet") or plasma ("dry") chemical agent removes the uppermost layer of the substrate in the areas that are not protected by photoresist. In semiconductor fabrication, dry etching techniques are generally used, as they can be made anisotropic, in order to avoid significant undercutting of the photoresist pattern. This is essential when the width of the features to be defined is similar to or less than the thickness of the material being etched (i.e. when the aspect ratio approaches unity). Wet etch processes are generally isotropic in nature, which is often indispensable for microelectromechanical systems, where suspended structures must be "released" from the underlying layer.
The development of low-defectivity anisotropic dry-etch process has enabled the ever-smaller features defined photolithographically in the resist to be transferred to the substrate material.
=== Photoresist removal ===
After a photoresist is no longer needed, it must be removed from the substrate. This usually requires a liquid "resist stripper", which chemically alters the resist so that it no longer adheres to the substrate. Alternatively, the photoresist may be removed by a plasma containing oxygen, which oxidizes it. This process is called plasma ashing and resembles dry etching. The use of 1-Methyl-2-pyrrolidone (NMP) solvent for photoresist is another method used to remove an image. When the resist has been dissolved, the solvent can be removed by heating to 80 °C without leaving any residue.
== Exposure ("printing") systems ==
Exposure systems typically produce an image on the wafer using a photomask. The photomask blocks light in some areas and lets it pass in others. (Maskless lithography projects a precise beam directly onto the wafer without using a mask, but it is not widely used in commercial processes.) Exposure systems may be classified by the optics that transfer the image from the mask to the wafer.
Photolithography produces better thin film transistor structures than printed electronics, due to smoother printed layers, less wavy patterns, and more accurate drain-source electrode registration.
=== Contact and proximity ===
A contact aligner, the simplest exposure system, puts a photomask in direct contact with the wafer and exposes it to a uniform light. A proximity aligner puts a small gap of around 5 microns between the photomask and wafer. In both cases, the mask covers the entire wafer, and simultaneously patterns every die.
Contact printing/lithography is liable to damage both the mask and the wafer, and this was the primary reason it was abandoned for high volume production. Both contact and proximity lithography require the light intensity to be uniform across an entire wafer, and the mask to align precisely to features already on the wafer. As modern processes use increasingly large wafers, these conditions become increasingly difficult.
Research and prototyping processes often use contact or proximity lithography, because it uses inexpensive hardware and can achieve high optical resolution. The resolution in proximity lithography is approximately the square root of the product of the wavelength and the gap distance. Hence, except for projection lithography (see below), contact printing offers the best resolution, because its gap distance is approximately zero (neglecting the thickness of the photoresist itself). In addition, nanoimprint lithography may revive interest in this familiar technique, especially since the cost of ownership is expected to be low; however, the shortcomings of contact printing discussed above remain as challenges.
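The square-root relationship mentioned above can be illustrated numerically. A minimal sketch (the i-line wavelength and 5-micron gap below are assumed example values, not figures from any specific tool):

```python
import math

def proximity_resolution_nm(wavelength_nm, gap_nm):
    """Approximate minimum resolvable feature in proximity printing:
    roughly the square root of (wavelength x mask-to-wafer gap)."""
    return math.sqrt(wavelength_nm * gap_nm)

# Assumed example: i-line (365 nm) source with a 5 micron proximity gap
res = proximity_resolution_nm(365, 5000)   # ~1351 nm, i.e. about 1.35 microns
```

This also shows why contact printing resolves better: as the gap approaches the resist thickness alone, the resolvable feature size shrinks accordingly.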
=== Projection ===
Very-large-scale integration (VLSI) lithography uses projection systems. Unlike contact or proximity masks, which cover an entire wafer, projection masks (known as "reticles") show only one die or an array of dies (known as a "field") covering a portion of the wafer at a time. Projection exposure systems (steppers or scanners) project the mask onto the wafer many times, changing the position of the wafer with every projection, to fully pattern the wafer. The difference between steppers and scanners is that, during exposure, a scanner moves the photomask and the wafer simultaneously, while a stepper moves only the wafer. Mask aligners, which preceded steppers, move neither the photomask nor the wafer during exposure and use masks that cover the entire wafer. Immersion lithography scanners use a layer of ultrapure water between the lens and the wafer to increase resolution. An alternative to photolithography is nanoimprint lithography. The maximum size of the image that can be projected onto a wafer is known as the reticle limit.
== Photomasks ==
The image for the mask originates from a computerized data file. This data file is converted to a series of polygons and written onto a square of fused quartz substrate covered with a layer of chromium using a photolithographic process. A laser beam (laser writer) or a beam of electrons (e-beam writer) is used to expose the pattern defined by the data file and travels over the surface of the substrate in either a vector or raster scan manner. Where the photoresist on the mask is exposed, the chrome can be etched away, leaving a clear path for the illumination light in the stepper/scanner system to travel through.
== Resolution in projection systems ==
The ability to project a clear image of a small feature onto the wafer is limited by the wavelength of the light that is used, and by the ability of the reduction lens system to capture enough diffraction orders from the illuminated mask. Current state-of-the-art photolithography tools use deep ultraviolet (DUV) light from excimer lasers with wavelengths of 248 nm (KrF) and 193 nm (ArF) (the dominant lithography technology today is thus also called "excimer laser lithography"), which allow minimum feature sizes down to 50 nm. Excimer laser lithography has thus played a critical role in the continued advance of Moore's law for the last 20 years (see below).
The minimum feature size that a projection system can print is given approximately by:

CD = k1 · (λ / NA)

where CD is the minimum feature size (also called the critical dimension, target design rule, or "half-pitch"), λ is the wavelength of light used, and NA is the numerical aperture of the lens as seen from the wafer. k1 (commonly called the k1 factor) is a coefficient that encapsulates process-related factors and typically equals 0.4 for production. (k1 is actually a function of process factors such as the angle of incident light on a reticle and the incident light intensity distribution; it is fixed per process.) The minimum feature size can be reduced by decreasing this coefficient through computational lithography.
According to this equation, minimum feature sizes can be decreased by decreasing the wavelength, and increasing the numerical aperture (to achieve a tighter focused beam and a smaller spot size). However, this design method runs into a competing constraint. In modern systems, the depth of focus is also a concern:
DF = k2 · (λ / NA²)

Here, k2 is another process-related coefficient. The depth of focus restricts the thickness of the photoresist and the depth of the topography on the wafer. Chemical mechanical polishing is often used to flatten topography before high-resolution lithographic steps.
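The two Rayleigh expressions can be evaluated together to show the competing constraint: raising NA shrinks the printable feature linearly but collapses the depth of focus quadratically. A minimal sketch (the NA of 1.35 and k2 of 0.5 are assumed illustrative values):

```python
def min_feature_size(k1, wavelength_nm, na):
    """Rayleigh criterion for the minimum printable feature (critical dimension)."""
    return k1 * wavelength_nm / na

def depth_of_focus(k2, wavelength_nm, na):
    """Depth of focus shrinks with the square of the numerical aperture."""
    return k2 * wavelength_nm / na ** 2

# Assumed example: 193 nm ArF immersion scanner, NA = 1.35, k1 = 0.4, k2 = 0.5
cd = min_feature_size(0.4, 193.0, 1.35)   # ~57.2 nm half-pitch
dof = depth_of_focus(0.5, 193.0, 1.35)    # ~53.0 nm usable focus range
```

Doubling NA in this sketch would halve CD but quarter DF, which is why planarization steps such as chemical mechanical polishing become essential at high NA.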
From classical optics, k1=0.61 by the Rayleigh criterion. The image of two points separated by less than 1.22 wavelength/NA will not maintain that separation but will be larger due to the interference between the Airy discs of the two points. It must also be remembered, though, that the distance between two features can also change with defocus.
Resolution is also nontrivial in a two-dimensional context. For example, a tighter line pitch results in wider gaps (in the perpendicular direction) between the ends of the lines. More fundamentally, straight edges become rounded for shortened rectangular features, where both x and y pitches are near the resolution limit.
For advanced nodes, blur, rather than wavelength, becomes the key resolution-limiting factor. Minimum pitch is given by the blur sigma divided by 0.14. Blur is affected by dose as well as quantum yield, leading to a tradeoff with stochastic defects in the case of EUV.
== Stochastic effects ==
As light consists of photons, at low doses the image quality ultimately depends on the photon number. This affects the use of extreme ultraviolet lithography or EUVL, which is limited to the use of low doses on the order of 20 photons/nm2.
This is due to fewer photons for the same energy dose for a shorter wavelength (higher energy per photon). With fewer photons making up the image, there is noise in the edge placement.
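The shot-noise argument above can be made concrete: for a Poisson-distributed exposure, the relative fluctuation in the number of photons landing on a small area scales as 1/sqrt(N). A minimal sketch (the 20 photons/nm² dose is the figure quoted above; the 4 nm × 4 nm edge pixel is an assumed illustrative size):

```python
import math

def relative_shot_noise(dose_photons_per_nm2, area_nm2):
    """Relative (1-sigma) fluctuation of a Poisson photon count: 1/sqrt(N)."""
    n = dose_photons_per_nm2 * area_nm2
    return 1.0 / math.sqrt(n)

# Assumed: EUV dose of ~20 photons/nm^2 over a (4 nm)^2 pixel at a feature edge
noise = relative_shot_noise(20, 16)   # N = 320 photons, ~5.6% fluctuation
```

At DUV wavelengths the same energy dose contains over an order of magnitude more photons, which is why edge-placement noise is a distinctly EUV-era concern.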
The stochastic effects would become more complicated with larger pitch patterns with more diffraction orders and using more illumination source points.
Secondary electrons in EUV lithography aggravate the stochastic characteristics.
== Light sources ==
Historically, photolithography has used ultraviolet light from gas-discharge lamps using mercury, sometimes in combination with noble gases such as xenon. These lamps produce light across a broad spectrum with several strong peaks in the ultraviolet range. This spectrum is filtered to select a single spectral line. From the early 1960s through the mid-1980s, Hg lamps had been used in lithography for their spectral lines at 436 nm ("g-line"), 405 nm ("h-line") and 365 nm ("i-line"). However, with the semiconductor industry's need for both higher resolution (to produce denser and faster chips) and higher throughput (for lower costs), lamp-based lithography tools were no longer able to meet the industry's high-end requirements.
This challenge was overcome in 1982 when excimer laser lithography was proposed and demonstrated at IBM by Kanti Jain. Excimer laser lithography machines (steppers and scanners) became the primary tools in microelectronics production, and have enabled minimum feature sizes in chip manufacturing to shrink from 800 nanometers in 1990 to 7 nanometers in 2018. From an even broader scientific and technological perspective, in the 50-year history of the laser since its first demonstration in 1960, the invention and development of excimer laser lithography has been recognized as a major milestone.
The commonly used deep ultraviolet excimer lasers in lithography systems are the krypton fluoride (KrF) laser at 248 nm wavelength and the argon fluoride laser (ArF) at 193 nm wavelength. The primary manufacturers of excimer laser light sources in the 1980s were Lambda Physik (now part of Coherent, Inc.) and Lumonics. Since the mid-1990s Cymer Inc. has become the dominant supplier of excimer laser sources to the lithography equipment manufacturers, with Gigaphoton Inc. as their closest rival. Generally, an excimer laser is designed to operate with a specific gas mixture; therefore, changing wavelength is not a trivial matter, as the method of generating the new wavelength is completely different, and the absorption characteristics of materials change. For example, air begins to absorb significantly around the 193 nm wavelength; moving to sub-193 nm wavelengths would require installing vacuum pump and purge equipment on the lithography tools (a significant challenge). An inert gas atmosphere can sometimes be used as a substitute for a vacuum, to avoid the need for hard plumbing. Furthermore, insulating materials such as silicon dioxide, when exposed to photons with energy greater than the band gap, release free electrons and holes which subsequently cause adverse charging.
Optical lithography has been extended to feature sizes below 50 nm using the 193 nm ArF excimer laser and liquid immersion techniques. Also termed immersion lithography, this enables the use of optics with numerical apertures exceeding 1.0. The liquid used is typically ultra-pure, deionised water, which provides for a refractive index above that of the usual air gap between the lens and the wafer surface. The water is continually circulated to eliminate thermally-induced distortions. Water will only allow NA's of up to ~1.4, but fluids with higher refractive indices would allow the effective NA to be increased further.
Experimental tools using the 157 nm wavelength from the F2 excimer laser in a manner similar to current exposure systems have been built. These were once targeted to succeed 193 nm lithography at the 65 nm feature size node but have now all but been eliminated by the introduction of immersion lithography. This was due to persistent technical problems with the 157 nm technology and economic considerations that provided strong incentives for the continued use of 193 nm excimer laser lithography technology. High-index immersion lithography is the newest extension of 193 nm lithography to be considered. In 2006, features less than 30 nm were demonstrated by IBM using this technique. These systems used CaF2 calcium fluoride lenses. Immersion lithography at 157 nm was explored.
UV excimer lasers have been demonstrated to about 126 nm (for Ar2*). Mercury arc lamps are designed to maintain a steady DC arc at 50 to 150 volts, but excimer lasers offer higher resolution. Excimer lasers are gas-based light systems, usually filled with inert and halide gases (Kr, Ar, Xe, F and Cl) that are charged by an electric field. The higher the frequency, the greater the resolution of the image; KrF lasers are able to operate at a frequency of 4 kHz. In addition to running at a higher frequency, excimer lasers are compatible with more advanced machines than mercury arc lamps are. They can also operate from greater distances (up to 25 meters) and maintain their accuracy with a series of mirrors and antireflective-coated lenses. By setting up multiple lasers and mirrors, energy loss is minimized, and since the lenses are coated with antireflective material, the light intensity remains roughly constant from the laser to the wafer.
Lasers have been used to indirectly generate non-coherent extreme UV (EUV) light at 13.5 nm for extreme ultraviolet lithography. The EUV light is not emitted by the laser, but rather by a tin or xenon plasma which is excited by an excimer or CO2 laser. This technique does not require a synchrotron, and EUV sources, as noted, do not produce coherent light. However vacuum systems and a number of novel technologies (including much higher EUV energies than are now produced) are needed to work with UV at the edge of the X-ray spectrum (which begins at 10 nm). As of 2020, EUV is in mass production use by leading edge foundries such as TSMC and Samsung.
Theoretically, an alternative light source for photolithography, especially if and when wavelengths continue to decrease to extreme UV or X-ray, is the free-electron laser (or one might say xaser for an X-ray device). Free-electron lasers can produce high quality beams at arbitrary wavelengths.
Visible and infrared femtosecond lasers have also been applied to lithography. In that case photochemical reactions are initiated by multiphoton absorption. These light sources have several benefits, including the possibility of manufacturing true 3D objects and processing non-photosensitized (pure) glass-like materials with superb optical resiliency.
== Experimental methods ==
Photolithography has been defeating predictions of its demise for many years. For instance, by the early 1980s, many in the semiconductor industry had come to believe that features smaller than 1 micron could not be printed optically. Modern techniques using excimer laser lithography already print features with dimensions a fraction of the wavelength of light used, an amazing optical feat. New techniques such as immersion lithography, dual-tone resist and multiple patterning continue to improve the resolution of 193 nm lithography. Meanwhile, current research is exploring alternatives to conventional UV, such as electron beam lithography, X-ray lithography, extreme ultraviolet lithography and ion projection lithography. Extreme ultraviolet lithography entered mass production use in 2018 with Samsung, and other manufacturers have followed suit.
Massively parallel electron beam lithography has been explored as an alternative to photolithography, and was tested by TSMC, but it did not succeed and the technology from the main developer of the technique, MAPPER, was purchased by ASML, although electron beam lithography was at one point used in chip production by IBM. Electron beam lithography is only used in niche applications such as photomask production.
== Economy ==
A 2001 NIST publication reported that photolithography constituted about 35% of total wafer processing costs.
In 2021, the photolithography industry was valued at over US$8 billion.
== See also ==
Dip-pen nanolithography
Soft lithography
Magnetolithography
Nanochannel glass materials
Stereolithography, a macroscale process used to produce three-dimensional shapes
Wafer foundry
Chemistry of photolithography
Computational lithography
ASML Holding
Alvéole Lab
Semiconductor device fabrication
== References ==
== External links ==
BYU Photolithography Resources
Semiconductor Lithography – an overview of lithography
Optical Lithography Introduction – IBM site with lithography-related articles
A vaccine is a biological preparation that provides active acquired immunity to a particular infectious or malignant disease. The safety and effectiveness of vaccines has been widely studied and verified. A vaccine typically contains an agent that resembles a disease-causing microorganism and is often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins. The agent stimulates the body's immune system to recognize the agent as a threat, destroy it, and further recognize and destroy any of the microorganisms associated with that agent that it may encounter in the future.
Vaccines can be prophylactic (to prevent or alleviate the effects of a future infection by a natural or "wild" pathogen), or therapeutic (to fight a disease that has already occurred, such as cancer). Some vaccines offer full sterilizing immunity, in which infection is prevented.
The administration of vaccines is called vaccination. Vaccination is the most effective method of preventing infectious diseases; widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the restriction of diseases such as polio, measles, and tetanus from much of the world. The World Health Organization (WHO) reports that licensed vaccines are available for twenty-five different preventable infections.
The first recorded use of inoculation to prevent smallpox (see variolation) occurred in the 16th century in China, with the earliest hints of the practice in China coming during the 10th century. It was also the first disease for which a vaccine was produced. The folk practice of inoculation against smallpox was brought from Turkey to Britain in 1721 by Lady Mary Wortley Montagu.
The terms vaccine and vaccination are derived from Variolae vaccinae (smallpox of the cow), the term devised by Edward Jenner (who both developed the concept of vaccines and created the first vaccine) to denote cowpox. He used the phrase in 1798 for the long title of his Inquiry into the Variolae vaccinae Known as the Cow Pox, in which he described the protective effect of cowpox against smallpox. In 1881, to honor Jenner, Louis Pasteur proposed that the terms should be extended to cover the new protective inoculations then being developed. The science of vaccine development and production is termed vaccinology.
== Effectiveness ==
There is overwhelming scientific consensus that vaccines are a very safe and effective way to fight and eradicate infectious diseases. The immune system recognizes vaccine agents as foreign, destroys them, and "remembers" them. When the virulent version of an agent is encountered, the body recognizes the protein coat on the agent, and thus is prepared to respond, by first neutralizing the target agent before it can enter cells, and secondly by recognizing and destroying infected cells before that agent can multiply to vast numbers.
In 1958, there were 763,094 cases of measles in the United States; 552 deaths resulted. After the introduction of new vaccines, the number of cases dropped to fewer than 150 per year (median of 56). In early 2008, there were 64 suspected cases of measles. Fifty-four of those infections were associated with importation from another country, although only thirteen percent were actually acquired outside the United States; 63 of the 64 individuals either had never been vaccinated against measles or were uncertain whether they had been vaccinated.
The measles vaccine is estimated to prevent a million deaths every year.
Vaccines led to the eradication of smallpox, one of the most contagious and deadly diseases in humans. Other diseases such as rubella, polio, measles, mumps, chickenpox, and typhoid are nowhere near as common as they were a hundred years ago thanks to widespread vaccination programs. As long as the vast majority of people are vaccinated, it is much more difficult for an outbreak of disease to occur, let alone spread. This effect is called herd immunity. Polio, which is transmitted only among humans, is targeted by an extensive eradication campaign that has seen endemic polio restricted to only parts of three countries (Afghanistan, Nigeria, and Pakistan). However, the difficulty of reaching all children, cultural misunderstandings, and disinformation have caused the anticipated eradication date to be missed several times.
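The herd-immunity effect described above can be quantified with the classic threshold formula from the simple SIR model: roughly 1 − 1/R0 of the population must be immune to block sustained transmission, and more coverage is needed when a vaccine is imperfect. The sketch below is illustrative only; the R0 values are rough textbook figures, not authoritative estimates.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to block
    sustained transmission, under the simple SIR approximation:
    threshold = 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # with R0 <= 1, outbreaks die out on their own
    return 1.0 - 1.0 / r0

def critical_vaccination_coverage(r0: float, effectiveness: float) -> float:
    """Coverage needed when the vaccine is imperfect:
    Vc = (1 - 1/R0) / E.  Can exceed 1.0, i.e. be unreachable."""
    return herd_immunity_threshold(r0) / effectiveness

# Illustrative, approximate basic reproduction numbers:
for disease, r0 in [("polio", 6), ("smallpox", 6), ("measles", 15)]:
    print(f"{disease}: immune fraction needed ~ "
          f"{herd_immunity_threshold(r0):.0%}; coverage at 95% "
          f"effectiveness ~ {critical_vaccination_coverage(r0, 0.95):.0%}")
```

The formula makes the text's point concrete: the more contagious the disease (higher R0), the closer to universal the vaccination coverage must be before outbreaks stop spreading.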
Vaccines also help prevent the development of antibiotic resistance. For example, by greatly reducing the incidence of pneumonia caused by Streptococcus pneumoniae, vaccine programs have greatly reduced the prevalence of infections resistant to penicillin or other first-line antibiotics.
=== Limitations ===
Limitations to their effectiveness, nevertheless, exist. Sometimes, protection fails for vaccine-related reasons such as failures in vaccine attenuation, vaccination regimens or administration.
Failure may also occur for host-related reasons if the host's immune system does not respond adequately or at all. Host-related lack of response occurs in an estimated 2–10% of individuals, due to factors including genetics, immune status, age, health and nutritional status. One type of primary immunodeficiency disorder resulting in genetic failure is X-linked agammaglobulinemia, in which the absence of an enzyme essential for B cell development prevents the host's immune system from generating antibodies to a pathogen.
Host–pathogen interactions and responses to infection are dynamic processes involving multiple pathways in the immune system. A host does not develop antibodies instantaneously: while the body's innate immunity may be activated in as little as twelve hours, adaptive immunity can take 1–2 weeks to fully develop. During that time, the host can still become infected.
Once antibodies are produced, they may promote immunity in any of several ways, depending on the class of antibodies involved. Their success in clearing or inactivating a pathogen will depend on the amount of antibodies produced and on the extent to which those antibodies are effective at countering the strain of the pathogen involved, since different strains may be differently susceptible to a given immune reaction.
In some cases vaccines may result in partial immune protection (in which immunity is less than 100% effective but still reduces risk of infection) or in temporary immune protection (in which immunity wanes over time) rather than full or permanent immunity. They can still raise the reinfection threshold for the population as a whole and make a substantial impact. They can also mitigate the severity of infection, resulting in a lower mortality rate, lower morbidity, faster recovery from illness, and a wide range of other effects.
Those who are older often display less of a response than those who are younger, a pattern known as immunosenescence.
Adjuvants commonly are used to boost immune response, particularly for older people whose immune response to a simple vaccine may have weakened.
The efficacy or performance of the vaccine is dependent on several factors:
the disease itself (for some diseases vaccination performs better than for others)
the strain of vaccine (some vaccines are specific to, or at least most effective against, particular strains of the disease)
whether the vaccination schedule has been properly observed.
idiosyncratic response to vaccination; some individuals are "non-responders" to certain vaccines, meaning that they do not generate antibodies even after being vaccinated correctly.
assorted factors such as ethnicity, age, or genetic predisposition.
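How well a vaccine performs against a given disease is commonly reported as vaccine efficacy, computed from attack rates in a controlled trial. A minimal sketch of the standard relative-risk-reduction calculation follows; the attack rates in the example are invented for illustration, not taken from any real trial.

```python
def vaccine_efficacy(attack_rate_unvaccinated: float,
                     attack_rate_vaccinated: float) -> float:
    """Standard formula: VE = (ARU - ARV) / ARU,
    returned here as a fraction between 0 and 1."""
    if attack_rate_unvaccinated <= 0:
        raise ValueError("unvaccinated attack rate must be positive")
    return ((attack_rate_unvaccinated - attack_rate_vaccinated)
            / attack_rate_unvaccinated)

# Hypothetical trial: 5% of unvaccinated and 1% of vaccinated
# participants became infected during the study period.
ve = vaccine_efficacy(0.05, 0.01)
print(f"Vaccine efficacy: {ve:.0%}")  # prints "Vaccine efficacy: 80%"
```

An efficacy below 100% does not mean the vaccine is useless: as noted above, partial protection still reduces individual risk and contributes to herd immunity.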
If a vaccinated individual does develop the disease vaccinated against (breakthrough infection), the disease is likely to be less severe and less transmissible than in unvaccinated cases.
Important considerations in an effective vaccination program:
careful modeling to anticipate the effect that an immunization campaign will have on the epidemiology of the disease in the medium to long term
ongoing surveillance for the relevant disease following introduction of a new vaccine
maintenance of high immunization rates, even when a disease has become rare
== Safety ==
Vaccinations given to children, adolescents, or adults are generally safe. Adverse effects, if any, are generally mild. The rate of side effects depends on the vaccine in question. Some common side effects include fever, pain around the injection site, and muscle aches. Additionally, some individuals may be allergic to ingredients in the vaccine. The MMR vaccine is rarely associated with febrile seizures.
Host-("vaccinee")-related determinants that render a person susceptible to infection, such as genetics, health status (underlying disease, nutrition, pregnancy, sensitivities or allergies), immune competence, age, and economic impact or cultural environment can be primary or secondary factors affecting the severity of infection and response to a vaccine. Elderly (above age 60), allergen-hypersensitive, and obese people have susceptibility to compromised immunogenicity, which prevents or inhibits vaccine effectiveness, possibly requiring separate vaccine technologies for these specific populations or repetitive booster vaccinations to limit virus transmission.
Severe side effects are extremely rare. Varicella vaccine is rarely associated with complications in immunodeficient individuals, and rotavirus vaccines are moderately associated with intussusception.
At least 19 countries have no-fault compensation programs to provide compensation for those with severe adverse effects of vaccination. The United States' program is known as the National Childhood Vaccine Injury Act, and the United Kingdom employs the Vaccine Damage Payment.
== Types ==
Vaccines typically contain attenuated, inactivated or dead organisms or purified products derived from them. There are several types of vaccines in use. These represent different strategies used to try to reduce the risk of illness while retaining the ability to induce a beneficial immune response.
=== Attenuated ===
Some vaccines contain live, attenuated microorganisms. Many of these are active viruses that have been cultivated under conditions that disable their virulent properties, or that use closely related but less dangerous organisms to produce a broad immune response. Although most attenuated vaccines are viral, some are bacterial in nature. Examples include the viral diseases yellow fever, measles, mumps, and rubella, and the bacterial disease typhoid. The live Mycobacterium tuberculosis vaccine developed by Calmette and Guérin is not made of a contagious strain but contains a virulently modified strain called "BCG" used to elicit an immune response to the vaccine. The live attenuated vaccine containing strain Yersinia pestis EV is used for plague immunization. Attenuated vaccines have some advantages and disadvantages. Attenuated, or live, weakened, vaccines typically provoke more durable immunological responses. Attenuated vaccines also elicit a cellular and humoral response. However, they may not be safe for use in immunocompromised individuals, and on rare occasions mutate to a virulent form and cause disease.
=== Inactivated ===
Some vaccines contain microorganisms that have been killed or inactivated by physical or chemical means. Examples include IPV (polio vaccine), hepatitis A vaccine, rabies vaccine and most influenza vaccines.
=== Toxoid ===
Toxoid vaccines are made from inactivated toxic compounds that cause illness rather than the microorganism. Examples of toxoid-based vaccines include tetanus and diphtheria. Not all toxoids are for microorganisms; for example, Crotalus atrox toxoid is used to vaccinate dogs against rattlesnake bites.
=== Subunit ===
Rather than introducing an inactivated or attenuated microorganism to an immune system (which would constitute a "whole-agent" vaccine), a subunit vaccine uses a fragment of it to create an immune response. One example is the subunit vaccine against hepatitis B, which is composed of only the surface proteins of the virus (previously extracted from the blood serum of chronically infected patients but now produced by recombination of the viral genes into yeast). Other examples include the Gardasil virus-like particle human papillomavirus (HPV) vaccine, the hemagglutinin and neuraminidase subunits of the influenza virus, and edible algae vaccines. A subunit vaccine is being used for plague immunization.
=== Conjugate ===
Certain bacteria have a polysaccharide outer coat that is poorly immunogenic. By linking these outer coats to proteins (e.g., toxins), the immune system can be led to recognize the polysaccharide as if it were a protein antigen. This approach is used in the Haemophilus influenzae type B vaccine.
=== Outer membrane vesicle ===
Outer membrane vesicles (OMVs) are naturally immunogenic and can be manipulated to produce potent vaccines. The best known OMV vaccines are those developed for serotype B meningococcal disease.
=== Heterotypic ===
Heterologous vaccines, also known as "Jennerian vaccines", are vaccines that are pathogens of other animals that either do not cause disease or cause mild disease in the organism being treated. The classic example is Jenner's use of cowpox to protect against smallpox. A current example is the use of BCG vaccine made from Mycobacterium bovis to protect against tuberculosis.
=== Genetic vaccine ===
Genetic vaccines are based on the principle of uptake of a nucleic acid into cells, whereupon a protein is produced according to the nucleic acid template. This protein is usually the immunodominant antigen of the pathogen or a surface protein that enables the formation of neutralizing antibodies. The subgroup of genetic vaccines encompasses viral vector vaccines, RNA vaccines and DNA vaccines.
==== Viral vector ====
Viral vector vaccines use a safe virus to insert pathogen genes in the body to produce specific antigens, such as surface proteins, to stimulate an immune response. Viruses being researched for use as viral vectors include adenovirus, vaccinia virus, and VSV.
==== RNA ====
An mRNA vaccine (or RNA vaccine) is a novel type of vaccine which is composed of the nucleic acid RNA, packaged within a vector such as lipid nanoparticles. Among the COVID-19 vaccines are a number of RNA vaccines to combat the COVID-19 pandemic and some have been approved or have received emergency use authorization in some countries. For example, the Pfizer-BioNTech vaccine and Moderna mRNA vaccine are approved for use in adults and children in the US.
==== DNA ====
A DNA vaccine uses a DNA plasmid (pDNA) that encodes an antigenic protein originating from the pathogen against which the vaccine is targeted. pDNA is inexpensive, stable, and relatively safe, making it an excellent option for vaccine delivery.
This approach offers a number of potential advantages over traditional approaches, including the stimulation of both B- and T-cell responses, improved vaccine stability, the absence of any infectious agent and the relative ease of large-scale manufacture.
=== Experimental ===
Many innovative vaccines are also in development and use.
Dendritic cell vaccines combine dendritic cells with antigens to present the antigens to the body's white blood cells, thus stimulating an immune reaction. These vaccines have shown some positive preliminary results for treating brain tumors and are also tested in malignant melanoma.
Recombinant vector – by combining the physiology of one microorganism and the DNA of another, immunity can be created against diseases that have complex infection processes. An example is the RVSV-ZEBOV vaccine licensed to Merck that is being used in 2018 to combat ebola in Congo.
T-cell receptor peptide vaccines are under development for several diseases using models of Valley Fever, stomatitis, and atopic dermatitis. These peptides have been shown to modulate cytokine production and improve cell-mediated immunity.
Targeting of identified bacterial proteins that are involved in complement inhibition would neutralize the key bacterial virulence mechanism.
The use of plasmids has been validated in preclinical studies as a protective vaccine strategy for cancer and infectious diseases. However, in human studies, this approach has failed to provide clinically relevant benefit. The overall efficacy of plasmid DNA immunization depends on increasing the plasmid's immunogenicity while also correcting for factors involved in the specific activation of immune effector cells.
Bacterial vector – Similar in principle to viral vector vaccines, but using bacteria instead.
Antigen-presenting cell
Technologies which may allow rapid vaccine deployment in response to a novel pathogen include the use of virus-like particles or protein nanoparticles.
Inverse vaccines are vaccines that train the immune system to not respond to certain substances.
While most vaccines are created using inactivated or attenuated compounds from microorganisms, synthetic vaccines are composed mainly or wholly of synthetic peptides, carbohydrates, or antigens.
== Valence ==
Vaccines may be monovalent (also called univalent) or multivalent (also called polyvalent). A monovalent vaccine is designed to immunize against a single antigen or single microorganism. A multivalent or polyvalent vaccine is designed to immunize against two or more strains of the same microorganism, or against two or more microorganisms. The valency of a multivalent vaccine may be denoted with a Greek or Latin prefix (e.g., bivalent, trivalent, or tetravalent/quadrivalent). In certain cases, a monovalent vaccine may be preferable for rapidly developing a strong immune response.
=== Interactions ===
When two or more vaccines are mixed in the same formulation, the two vaccines can interfere. This most frequently occurs with live attenuated vaccines, where one of the vaccine components is more robust than the others and suppresses the growth and immune response to the other components.
This phenomenon was noted in the trivalent Sabin polio vaccine, where the relative amount of serotype 2 virus in the vaccine had to be reduced to stop it from interfering with the "take" of the serotype 1 and 3 viruses in the vaccine. To accomplish this, the doses of serotypes 1 and 3 were increased in the vaccine in the early 1960s. It was also noted in a 2001 study to be a problem with dengue vaccines, where the DEN-3 serotype was found to predominate and suppress the response to DEN-1, -2 and -4 serotypes.
== Other contents ==
=== Adjuvants ===
Vaccines typically contain one or more adjuvants, used to boost the immune response. Tetanus toxoid, for instance, is usually adsorbed onto alum. This presents the antigen in such a way as to produce a greater action than the simple aqueous tetanus toxoid. People who have an adverse reaction to adsorbed tetanus toxoid may be given the simple vaccine when the time comes for a booster.
In the preparation for the 1990 Persian Gulf campaign, the whole cell pertussis vaccine was used as an adjuvant for anthrax vaccine. This produces a more rapid immune response than giving only the anthrax vaccine, which is of some benefit if exposure might be imminent.
=== Preservatives ===
Vaccines may also contain preservatives to prevent contamination with bacteria or fungi. Until recent years, the preservative thiomersal (a.k.a. Thimerosal in the US and Japan) was used in many vaccines that did not contain live viruses. As of 2005, the only childhood vaccine in the U.S. that contains thiomersal in greater than trace amounts is the influenza vaccine, which is currently recommended only for children with certain risk factors. Single-dose influenza vaccines supplied in the UK do not list thiomersal in the ingredients. Preservatives may be used at various stages of the production of vaccines, and the most sophisticated methods of measurement might detect traces of them in the finished product, as they may in the environment and population as a whole.
Many vaccines need preservatives to prevent serious adverse effects such as Staphylococcus infection, which in one 1928 incident killed 12 of 21 children inoculated with a diphtheria vaccine that lacked a preservative. Several preservatives are available, including thiomersal, phenoxyethanol, and formaldehyde. Thiomersal is more effective against bacteria, has a better shelf-life, and improves vaccine stability, potency, and safety; however, in the U.S., the European Union, and a few other affluent countries, it is no longer used as a preservative in childhood vaccines, as a precautionary measure due to its mercury content. Although controversial claims have been made that thiomersal contributes to autism, no convincing scientific evidence supports these claims. Furthermore, a 10–11-year study of 657,461 children found that the MMR vaccine does not cause autism and actually reduced the risk of autism by seven percent.
=== Excipients ===
Besides the active vaccine itself, the following excipients and residual manufacturing compounds are present or may be present in vaccine preparations:
Aluminum salts or gels are added as adjuvants. Adjuvants are added to promote an earlier, more potent response, and more persistent immune response to the vaccine; they allow for a lower vaccine dosage.
Antibiotics are added to some vaccines to prevent the growth of bacteria during production and storage of the vaccine.
Egg protein is present in the influenza vaccine and yellow fever vaccine as they are prepared using chicken eggs. Other proteins may be present.
Formaldehyde is used to inactivate bacterial products for toxoid vaccines. Formaldehyde is also used to inactivate unwanted viruses and kill bacteria that might contaminate the vaccine during production.
Monosodium glutamate (MSG) and 2-phenoxyethanol are used as stabilizers in a few vaccines to help the vaccine remain unchanged when the vaccine is exposed to heat, light, acidity, or humidity.
Thiomersal is a mercury-containing antimicrobial that is added to vials of vaccines that contain more than one dose to prevent contamination and growth of potentially harmful bacteria. Due to the controversy surrounding thiomersal, it has been removed from most vaccines except multi-use influenza, where it was reduced to levels so that a single dose contained less than a microgram of mercury, a level similar to eating ten grams of canned tuna.
== Nomenclature ==
Various fairly standardized abbreviations for vaccine names have developed, although the standardization is by no means centralized or global. For example, the vaccine names used in the United States have well-established abbreviations that are also widely known and used elsewhere. An extensive list of them provided in a sortable table and freely accessible is available at a US Centers for Disease Control and Prevention web page. The page explains that "The abbreviations [in] this table (Column 3) were standardized jointly by staff of the Centers for Disease Control and Prevention, ACIP Work Groups, the editor of the Morbidity and Mortality Weekly Report (MMWR), the editor of Epidemiology and Prevention of Vaccine-Preventable Diseases (the Pink Book), ACIP members, and liaison organizations to the ACIP."
Some examples are "DTaP" for diphtheria and tetanus toxoids and acellular pertussis vaccine, "DT" for diphtheria and tetanus toxoids, and "Td" for tetanus and diphtheria toxoids. At its page on tetanus vaccination, the CDC further explains that "Upper-case letters in these abbreviations denote full-strength doses of diphtheria (D) and tetanus (T) toxoids and pertussis (P) vaccine. Lower-case "d" and "p" denote reduced doses of diphtheria and pertussis used in the adolescent/adult-formulations. The 'a' in DTaP and Tdap stands for 'acellular', meaning that the pertussis component contains only a part of the pertussis organism."
Another list of established vaccine abbreviations is at the CDC's page called "Vaccine Acronyms and Abbreviations", with abbreviations used on U.S. immunization records. The United States Adopted Name system has some conventions for the word order of vaccine names, placing head nouns first and adjectives postpositively. This is why the USAN for "OPV" is "poliovirus vaccine live oral" rather than "oral poliovirus vaccine".
== Licensing ==
A vaccine licensure occurs after the successful conclusion of the development cycle and further the clinical trials and other programs involved through Phases I–III demonstrating safety, immunoactivity, immunogenetic safety at a given specific dose, proven effectiveness in preventing infection for target populations, and enduring preventive effect (time endurance or need for revaccination must be estimated). Because preventive vaccines are predominantly evaluated in healthy population cohorts and distributed among the general population, a high standard of safety is required. As part of a multinational licensing of a vaccine, the World Health Organization Expert Committee on Biological Standardization developed guidelines of international standards for manufacturing and quality control of vaccines, a process intended as a platform for national regulatory agencies to apply for their own licensing process. Vaccine manufacturers do not receive licensing until a complete clinical cycle of development and trials proves the vaccine is safe and has long-term effectiveness, following scientific review by a multinational or national regulatory organization, such as the European Medicines Agency (EMA) or the US Food and Drug Administration (FDA).
When developing countries adopt WHO guidelines for vaccine development and licensure, each country takes responsibility for issuing its own national licensure, and for managing, deploying, and monitoring the vaccine throughout its use in that nation. Building trust and acceptance of a licensed vaccine among the public is a task of communication by governments and healthcare personnel to ensure a vaccination campaign proceeds smoothly, saves lives, and enables economic recovery. When a vaccine is licensed, it will initially be in limited supply due to variable manufacturing, distribution, and logistical factors, requiring an allocation plan that determines which population segments should be prioritized to receive the vaccine first.
=== World Health Organization ===
Vaccines developed for multinational distribution via the United Nations Children's Fund (UNICEF) require pre-qualification by the WHO to ensure international standards of quality, safety, immunogenicity, and efficacy for adoption by numerous countries.
The process requires manufacturing consistency at WHO-contracted laboratories following Good Manufacturing Practice (GMP). When UN agencies are involved in vaccine licensure, individual nations collaborate by 1) issuing marketing authorization and a national license for the vaccine, its manufacturers, and distribution partners; and 2) conducting postmarketing surveillance, including records for adverse events after the vaccination program. The WHO works with national agencies to monitor inspections of manufacturing facilities and distributors for compliance with GMP and regulatory oversight.
Some countries choose to buy vaccines licensed by reputable national organizations, such as EMA, FDA, or national agencies in other affluent countries, but such purchases typically are more expensive and may not have distribution resources suitable to local conditions in developing countries.
=== European Union ===
In the European Union (EU), vaccines for pandemic pathogens, such as seasonal influenza, are licensed EU-wide where all the member states comply ("centralized"), are licensed for only some member states ("decentralized"), or are licensed on an individual national level. Generally, all EU states follow regulatory guidance and clinical programs defined by the European Committee for Medicinal Products for Human Use (CHMP), a scientific panel of the European Medicines Agency (EMA) responsible for vaccine licensure. The CHMP is supported by several expert groups who assess and monitor the progress of a vaccine before and after licensure and distribution.
=== United States ===
Under the FDA, the process of establishing evidence for vaccine clinical safety and efficacy is the same as for the approval process for prescription drugs. If successful through the stages of clinical development, the vaccine licensing process is followed by a Biologics License Application which must provide a scientific review team (from diverse disciplines, such as physicians, statisticians, microbiologists, chemists) and comprehensive documentation for the vaccine candidate having efficacy and safety throughout its development. Also during this stage, the proposed manufacturing facility is examined by expert reviewers for GMP compliance, and the label must have a compliant description to enable health care providers' definition of vaccine-specific use, including its possible risks, to communicate and deliver the vaccine to the public. After licensure, monitoring of the vaccine and its production, including periodic inspections for GMP compliance, continue as long as the manufacturer retains its license, which may include additional submissions to the FDA of tests for potency, safety, and purity for each vaccine manufacturing step.
=== India ===
In India, the Drugs Controller General, the head of department of the Central Drugs Standard Control Organization, India's national regulatory body for cosmetics, pharmaceuticals and medical devices, is responsible for the approval of licences for specified categories of drugs such as vaccines and other medicinal items, such as blood or blood products, IV fluids, and sera.
=== Postmarketing surveillance ===
Until a vaccine is in use amongst the general population, all potential adverse events from the vaccine may not be known, requiring manufacturers to conduct Phase IV studies for postmarketing surveillance of the vaccine while it is used widely in the public. The WHO works with UN member states to implement post-licensing surveillance. The FDA relies on a Vaccine Adverse Event Reporting System to monitor safety concerns about a vaccine throughout its use in the American public.
== Scheduling ==
In order to provide the best protection, children are recommended to receive vaccinations as soon as their immune systems are sufficiently developed to respond to particular vaccines, with additional "booster" shots often required to achieve "full immunity". This has led to the development of complex vaccination schedules. Global recommendations on vaccination schedules are issued by the Strategic Advisory Group of Experts and are further adapted by advisory committees at the country level, taking into account local factors such as disease epidemiology, acceptability of vaccination, equity in local populations, and programmatic and financial constraints. In the United States, the Advisory Committee on Immunization Practices, which recommends schedule additions for the Centers for Disease Control and Prevention, recommends routine vaccination of children against hepatitis A, hepatitis B, polio, mumps, measles, rubella, diphtheria, pertussis, tetanus, HiB, chickenpox, rotavirus, influenza, meningococcal disease and pneumonia.
The large number of vaccines and boosters recommended (up to 24 injections by age two) has led to problems with achieving full compliance. To combat declining compliance rates, various notification systems have been instituted and many combination injections are now marketed (e.g., Pentavalent vaccine and MMRV vaccine), which protect against multiple diseases.
Besides recommendations for infant vaccinations and boosters, many specific vaccines are recommended for other ages or for repeated injections throughout life – most commonly for measles, tetanus, influenza, and pneumonia. Pregnant women are often screened for continued resistance to rubella. The human papillomavirus vaccine is recommended in the U.S. (as of 2011) and UK (as of 2009). Vaccine recommendations for the elderly concentrate on pneumonia and influenza, which are more deadly to that group. In 2006, a vaccine was introduced against shingles, a disease caused by the chickenpox virus, which usually affects the elderly.
Scheduling and dosing of a vaccination may be tailored to the level of immunocompetence of an individual and to optimize population-wide deployment of a vaccine when its supply is limited, e.g. in the setting of a pandemic.
== Economics of development ==
One challenge in vaccine development is economic: Many of the diseases most demanding a vaccine, including HIV, malaria and tuberculosis, exist principally in poor countries. Pharmaceutical firms and biotechnology companies have little incentive to develop vaccines for these diseases because there is little revenue potential. Even in more affluent countries, financial returns are usually minimal and the financial and other risks are great.
Most vaccine development to date has relied on "push" funding by government, universities and non-profit organizations. Many vaccines have been highly cost effective and beneficial for public health. The number of vaccines actually administered has risen dramatically in recent decades. This increase, particularly in the number of different vaccines administered to children before entry into schools, may be due to government mandates and support, rather than economic incentive.
=== Patents ===
According to the World Health Organization (WHO), the biggest barrier to vaccine production in less developed countries has not been patents, but the substantial financial, infrastructure, and workforce requirements needed for market entry. Vaccines are complex mixtures of biological compounds, and unlike the case for prescription drugs, there are no true generic vaccines. The vaccine produced by a new facility must undergo complete clinical testing for safety and efficacy by the manufacturer. For most vaccines, specific processes in technology are patented. These can be circumvented by alternative manufacturing methods, but this requires R&D infrastructure and a suitably skilled workforce. In the case of a few relatively new vaccines, such as the human papillomavirus vaccine, the patents may impose an additional barrier.
When increased production of vaccines was urgently needed during the COVID-19 pandemic in 2021, the World Trade Organization and governments around the world evaluated whether to waive intellectual property rights and patents on COVID-19 vaccines, which would "eliminate all potential barriers to the timely access of affordable COVID-19 medical products, including vaccines and medicines, and scale up the manufacturing and supply of essential medical products".
== Production ==
Vaccine production is fundamentally different from other kinds of manufacturing – including regular pharmaceutical manufacturing – in that vaccines are intended to be administered to millions of people of whom the vast majority are perfectly healthy. This fact drives an extraordinarily rigorous production process with strict compliance requirements that go far beyond what is required of other products.
Depending upon the antigen, it can cost anywhere from US$50 to $500 million to build a vaccine production facility, which requires highly specialized equipment, clean rooms, and containment rooms. There is a global scarcity of personnel with the right combination of skills, expertise, knowledge, competence and personality to staff vaccine production lines. With the notable exceptions of Brazil, China, and India, many developing countries' educational systems are unable to provide enough qualified candidates, and vaccine makers based in such countries must hire expatriate personnel to keep production going.
Vaccine production has several stages. First, the antigen itself is generated. Viruses are grown either on primary cells such as chicken eggs (e.g., for influenza) or on continuous cell lines such as cultured human cells (e.g., for hepatitis A). Bacteria are grown in bioreactors (e.g., Haemophilus influenzae type b). Likewise, a recombinant protein derived from the viruses or bacteria can be generated in yeast, bacteria, or cell cultures.
After the antigen is generated, it is isolated from the cells used to generate it. A virus may need to be inactivated, possibly with no further purification required. Recombinant proteins need many operations involving ultrafiltration and column chromatography. Finally, the vaccine is formulated by adding adjuvant, stabilizers, and preservatives as needed. The adjuvant enhances the immune response to the antigen, stabilizers increase the storage life, and preservatives allow the use of multidose vials. Combination vaccines are harder to develop and produce, because of potential incompatibilities and interactions among the antigens and other ingredients involved.
The final stage in vaccine manufacture before distribution is fill and finish, which is the process of filling vials with vaccines and packaging them for distribution. Although this is a conceptually simple part of the vaccine manufacture process, it is often a bottleneck in the process of distributing and administering vaccines.
Vaccine production techniques are evolving. Cultured mammalian cells are expected to become increasingly important, compared to conventional options such as chicken eggs, due to greater productivity and low incidence of problems with contamination. Recombination technology that produces genetically detoxified vaccines is expected to grow in popularity for the production of bacterial vaccines that use toxoids. Combination vaccines are expected to reduce the quantities of antigens they contain, and thereby decrease undesirable interactions, by using pathogen-associated molecular patterns.
=== Vaccine manufacturers ===
The companies with the highest market share in vaccine production are Merck, Sanofi, GlaxoSmithKline, Pfizer and Novartis, with 70% of vaccine sales concentrated in the EU or US (2013). Vaccine manufacturing plants require large capital investments ($50 million up to $300 million) and may take between 4 and 6 years to construct, with the full process of vaccine development taking between 10 and 15 years. Manufacturing in developing countries is playing an increasing role in supplying these countries, specifically with regard to older vaccines and in Brazil, India and China. The manufacturers in India are the most advanced in the developing world and include the Serum Institute of India, one of the largest producers of vaccines by number of doses and an innovator in processes, recently improving efficiency of producing the measles vaccine by 10- to 20-fold by switching to an MRC-5 cell culture instead of chicken eggs. China's manufacturing capabilities are focused on supplying its own domestic need, with Sinopharm (CNPGC) alone providing over 85% of the doses for 14 different vaccines in China. Brazil is approaching the point of supplying its own domestic needs using technology transferred from the developed world.
== Delivery systems ==
One of the most common methods of delivering vaccines into the human body is injection.
The development of new delivery systems raises the hope of vaccines that are safer and more efficient to deliver and administer. Lines of research include liposomes and ISCOM (immune stimulating complex).
Notable developments in vaccine delivery technologies have included oral vaccines. Early attempts to apply oral vaccines showed varying degrees of promise, beginning early in the 20th century, at a time when the very possibility of an effective oral antibacterial vaccine was controversial. By the 1930s there was increasing interest in the prophylactic value of an oral typhoid fever vaccine for example.
An oral polio vaccine turned out to be effective when vaccinations were administered by volunteer staff without formal training; the results also demonstrated increased ease and efficiency of administering the vaccines. Effective oral vaccines have many advantages; for example, there is no risk of blood contamination. Vaccines intended for oral administration need not be liquid, and as solids, they commonly are more stable and less prone to damage or spoilage by freezing in transport and storage. Such stability reduces the need for a "cold chain": the resources required to keep vaccines within a restricted temperature range from the manufacturing stage to the point of administration, which, in turn, may decrease costs of vaccines.
A microneedle approach, which is still in stages of development, uses "pointed projections fabricated into arrays that can create vaccine delivery pathways through the skin".
An experimental needle-free vaccine delivery system is undergoing animal testing. A stamp-size patch similar to an adhesive bandage contains about 20,000 microscopic projections per square cm. This dermal administration potentially increases the effectiveness of vaccination, while requiring less vaccine than injection.
== In veterinary medicine ==
Vaccinations of animals are used both to prevent their contracting diseases and to prevent transmission of disease to humans. Both animals kept as pets and animals raised as livestock are routinely vaccinated. In some instances, wild populations may be vaccinated. This is sometimes accomplished with vaccine-laced food spread in a disease-prone area and has been used to attempt to control rabies in raccoons.
Where rabies occurs, rabies vaccination of dogs may be required by law. Other canine vaccines include canine distemper, canine parvovirus, infectious canine hepatitis, adenovirus-2, leptospirosis, Bordetella, canine parainfluenza virus, and Lyme disease, among others.
Cases of veterinary vaccines used in humans have been documented, whether intentional or accidental, with some cases of resultant illness, most notably with brucellosis. However, the reporting of such cases is rare and very little has been studied about the safety and results of such practices. With the advent of aerosol vaccination in veterinary clinics, human exposure to pathogens not naturally carried in humans, such as Bordetella bronchiseptica, has likely increased in recent years. In some cases, most notably rabies, the parallel veterinary vaccine against a pathogen may be orders of magnitude more economical than the human one.
=== DIVA vaccines ===
DIVA (Differentiation of Infected from Vaccinated Animals), also known as SIVA (Segregation of Infected from Vaccinated Animals) vaccines, make it possible to differentiate between infected and vaccinated animals. DIVA vaccines carry at least one epitope less than the equivalent wild microorganism. An accompanying diagnostic test that detects the antibody against that epitope assists in identifying whether the animal has been vaccinated or not.
The first DIVA vaccines (formerly termed marker vaccines; the term DIVA was coined in 1999) and companion diagnostic tests were developed by J. T. van Oirschot and colleagues at the Central Veterinary Institute in Lelystad, The Netherlands. They found that some existing vaccines against pseudorabies (also termed Aujeszky's disease) had deletions in their viral genome (among which was the gE gene). Monoclonal antibodies were produced against that deletion and selected to develop an ELISA that demonstrated antibodies against gE. In addition, novel genetically engineered gE-negative vaccines were constructed. Along the same lines, DIVA vaccines and companion diagnostic tests against bovine herpesvirus 1 infections have been developed.
The DIVA strategy has been applied in various countries to successfully eradicate pseudorabies virus from those countries. Swine populations were intensively vaccinated and monitored by the companion diagnostic test and, subsequently, the infected pigs were removed from the population. Bovine herpesvirus 1 DIVA vaccines are also widely used in practice. Considerable efforts are ongoing to apply the DIVA principle to a wide range of infectious diseases, such as classical swine fever, avian influenza, Actinobacillus pleuropneumonia and Salmonella infections in pigs.
== History ==
Prior to the introduction of vaccination with material from cases of cowpox (heterotypic immunisation), smallpox could be prevented by deliberate variolation with smallpox virus. According to historian Joseph Needham, Taoists in China as far back as the 10th century practiced a form of inoculation and passed it down through oral tradition, though Needham's claim has been criticized since the practice was not written about. The Chinese also practiced the oldest documented use of variolation, dating back to the fifteenth century. They implemented a method of "nasal insufflation" administered by blowing powdered smallpox material, usually scabs, up the nostrils. Various insufflation techniques have been recorded throughout the sixteenth and seventeenth centuries within China. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700; one by Martin Lister, who received a report from an employee of the East India Company stationed in China, and another by Clopton Havers. In France, Voltaire reported that the Chinese had practiced variolation "these hundred years".
Mary Wortley Montagu, who had witnessed variolation in Turkey, had her four-year-old daughter variolated in the presence of physicians of the Royal Court in 1721 upon her return to England. Later on that year, Charles Maitland conducted an experimental variolation of six prisoners in Newgate Prison in London. The experiment was a success, and soon variolation was drawing attention from the royal family, who helped promote the procedure. However, in 1783, several days after Prince Octavius of Great Britain was inoculated, he died.
In 1796, the physician Edward Jenner took pus from the hand of a milkmaid with cowpox, scratched it into the arm of an 8-year-old boy, James Phipps, and six weeks later variolated the boy with smallpox, afterwards observing that he did not catch smallpox. Jenner extended his studies and, in 1798, reported that his vaccine was safe in children and adults, and could be transferred from arm-to-arm, which reduced reliance on uncertain supplies from infected cows. In 1804, the Spanish Balmis smallpox vaccination expedition to Spain's colonies Mexico and the Philippines used arm-to-arm transfer of cowpox to get around the fact that the vaccine survived for only 12 days in vitro. Since vaccination with cowpox was much safer than smallpox inoculation, the latter, though still widely practiced in England, was banned in 1840.
Following on from Jenner's work, the second generation of vaccines was introduced in the 1880s by Louis Pasteur, who developed vaccines for chicken cholera and anthrax, and from the late nineteenth century vaccines were considered a matter of national prestige. National vaccination policies were adopted and compulsory vaccination laws were passed. In 1931, Alice Miles Woodruff and Ernest Goodpasture documented that the fowlpox virus could be grown in embryonated chicken eggs. Soon scientists began cultivating other viruses in eggs. Eggs were used for virus propagation in the development of a yellow fever vaccine in 1935 and an influenza vaccine in 1945. In 1959, growth media and cell culture replaced eggs as the standard method of virus propagation for vaccines.
Vaccinology flourished in the twentieth century, which saw the introduction of several successful vaccines, including those against diphtheria, measles, mumps, and rubella. Major achievements included the development of the polio vaccine in the 1950s and the eradication of smallpox during the 1960s and 1970s. Maurice Hilleman was the most prolific of the developers of the vaccines in the twentieth century. As vaccines became more common, many people began taking them for granted. However, vaccines remain elusive for many important diseases, including herpes simplex, malaria, gonorrhea, and HIV.
=== Generations of vaccines ===
First generation vaccines are whole-organism vaccines – either live and weakened, or killed forms. Live, attenuated vaccines, such as smallpox and polio vaccines, are able to induce killer T-cell (TC or CTL) responses, helper T-cell (TH) responses and antibody immunity. However, attenuated forms of a pathogen can convert to a dangerous form and may cause disease in immunocompromised vaccine recipients (such as those with AIDS). While killed vaccines do not have this risk, they cannot generate specific killer T-cell responses and may not work at all for some diseases.
Second generation vaccines were developed to reduce the risks from live vaccines. These are subunit vaccines, consisting of specific protein antigens (such as tetanus or diphtheria toxoid) or recombinant protein components (such as the hepatitis B surface antigen). They can generate TH and antibody responses, but not killer T cell responses.
RNA vaccines and DNA vaccines are examples of third generation vaccines. In 2016 a DNA vaccine for the Zika virus began testing at the National Institutes of Health. Separately, Inovio Pharmaceuticals and GeneOne Life Science began tests of a different DNA vaccine against Zika in Miami. Manufacturing the vaccines in volume was unsolved as of 2016. Clinical trials for DNA vaccines to prevent HIV are underway. mRNA vaccines such as BNT162b2 were developed in the year 2020 with the help of Operation Warp Speed and massively deployed to combat the COVID-19 pandemic. In 2021, Katalin Karikó and Drew Weissman received Columbia University's Horwitz Prize for their pioneering research in mRNA vaccine technology.
== Trends ==
Since at least 2013, scientists have been trying to develop synthetic third-generation vaccines by reconstructing the outside structure of a virus; it was hoped that this would help prevent vaccine resistance.
Principles that govern the immune response can now be used in tailor-made vaccines against many noninfectious human diseases, such as cancers and autoimmune disorders. For example, the experimental vaccine CYT006-AngQb has been investigated as a possible treatment for high blood pressure. Factors that affect the trends of vaccine development include progress in translatory medicine, demographics, regulatory science, political, cultural, and social responses.
=== Plants as bioreactors for vaccine production ===
The idea of vaccine production via transgenic plants was identified as early as 2003. Plants such as tobacco, potato, tomato, and banana can have genes inserted that cause them to produce vaccines usable for humans. In 2005, bananas were developed that produce a human vaccine against hepatitis B.
== Vaccine hesitancy ==
Vaccine hesitancy is a delay in acceptance, or refusal of vaccines despite the availability of vaccine services. The term covers outright refusals to vaccinate, delaying vaccines, accepting vaccines but remaining uncertain about their use, or using certain vaccines but not others. There is an overwhelming scientific consensus that vaccines are generally safe and effective. Vaccine hesitancy often results in disease outbreaks and deaths from vaccine-preventable diseases. The World Health Organization therefore characterized vaccine hesitancy as one of the top ten global health threats in 2019.
== References ==
== Further reading ==
Hall E, Wodi AP, Hamborsky J, Morelli V, Schillie S, eds. (2021). Epidemiology and Prevention of Vaccine-Preventable Diseases (14th ed.). Washington D.C.: U.S. Centers for Disease Control and Prevention (CDC).
== External links ==
Immunization, vaccine preventable diseases and polio transition World Health Organization
WHO Vaccine Position Papers World Health Organization
The History of Vaccines, from the College of Physicians of Philadelphia
This website was highlighted by Genetic Engineering & Biotechnology News in its "Best of the Web" section in January 2015. See: "The History of Vaccines". Best of the Web. Genetic Engineering & Biotechnology News. Vol. 35, no. 2. 15 January 2015. p. 38. | Wikipedia/Vaccine |
Substrate is used in a converting process such as printing or coating as a general term for the base material onto which, for example, images will be printed. Base materials may include:
plastic films or foils,
release liners,
textiles,
plastic containers,
any variety of paper (lightweight, heavyweight, coated, uncoated, paperboard, cardboard, etc.), or
parchment.
== Electronics ==
Printing processes such as silk-screening and photolithography are used in electronics to produce printed circuit boards and integrated circuits. Some common substrates used are:
Glass-reinforced epoxy, e.g. FR-4 board
Ceramic-PTFE laminate, e.g. 6010 board
Alumina ceramic
Silicon
Gallium arsenide
Sapphire
Quartz
== References ==
== Bibliography ==
Rogers, John WM; Plett, Calvin, Radio Frequency Integrated Circuit Design, Artech House, 2010 ISBN 1-60783-980-6. | Wikipedia/Substrate_(printing) |
The universal integrated circuit card (UICC) is the physical smart card (integrated circuit card) used in mobile terminals in 2G (GSM), 3G (UMTS), 4G (LTE), and 5G networks. The UICC ensures the integrity and security of all kinds of personal data, and it typically holds a few hundred kilobytes.
The official definition for UICC is found in ETSI TR 102 216, where it is defined as a "smart card that conforms to the specifications written and maintained by the ETSI Smart Card Platform project". In addition, the definition has a note that states that "UICC is neither an abbreviation nor an acronym".
NIST SP 800-101 Rev. 1 and NIST Computer Security Resource Center Glossary state that, "A UICC may be referred to as a SIM, USIM, RUIM or CSIM, and is used interchangeably with those terms", though this is an over-simplification. The primary component of a UICC is a SIM card.
== Design ==
A UICC consists of a CPU, ROM, RAM, EEPROM and I/O circuits. Early versions consisted of the whole full-size (85 × 54 mm, ISO/IEC 7810 ID-1) smart card. Soon the race for smaller telephones called for a smaller version of the card. The card was cropped down to 25 × 15 mm (ISO/IEC 7810 ID-000), as illustrated.
== 2G versus 3G ==
In 2G networks, the SIM card and SIM application were bound together, so that "SIM card" could refer either to the physical card or to the SIM application it carried.
In a GSM network, the UICC contains a SIM application and in a UMTS network, it contains a USIM application. A UICC may contain several applications, making it possible for the same smart card to give access to both GSM and UMTS networks, and also provide storage of a phone book and other applications. It is also possible to access a GSM network using a USIM application, and to access UMTS networks using a SIM application with mobile terminals prepared for this. With UMTS release 5, a new application, the IP Multimedia Services Identity Module (ISIM), is required for services in the IMS. The telephone book is a separate application and not part of either subscriber identity module.
In a cdmaOne/CDMA2000 ("CDMA") network, the UICC contains a CSIM application, in addition to 3GPP USIM and SIM applications. A card with all 3 features is called a removable user identity card, or R-UIM. Thus, the R-UIM card can be inserted into CDMA, GSM, or UMTS handsets, and will work in all three cases.
In 3G networks, it is a mistake to speak of a USIM, CSIM, or SIM card, as all three are applications running on a UICC card.
== Usage ==
Since the card slot is standardized, a subscriber can easily move their wireless account and phone number from one handset to another. This will also transfer their phone book and text messages. Similarly, a subscriber can usually change carriers by inserting a new carrier's UICC card into their existing handset. However, this is not always possible, because some carriers (e.g., in the U.S.) SIM-lock the phones that they sell, preventing rival carriers' cards from being used.
The use and content of the card can be protected by use of PIN codes. One code, PIN1, can be defined to control normal use of the phone. Another code, PIN2, can be set to allow the use of special functions (like limiting outbound telephone calls to a list of numbers). PUK1 and PUK2 are used to reset PIN1 and PIN2, respectively.
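The PIN/PUK retry behaviour described above can be illustrated with a toy model. This is a sketch, not a real card operating system; the retry limits used here (3 PIN attempts, 10 PUK attempts) are typical values on deployed cards, assumed for illustration:

```python
class UiccPin:
    """Toy model of UICC PIN/PUK retry behaviour (illustrative only)."""

    def __init__(self, pin, puk, pin_tries=3, puk_tries=10):
        self._pin, self._puk = pin, puk
        self._max_pin_tries = pin_tries
        self.pin_tries_left = pin_tries
        self.puk_tries_left = puk_tries

    def verify_pin(self, pin):
        if self.pin_tries_left == 0:
            raise RuntimeError("PIN blocked: present the PUK to unblock")
        if pin == self._pin:
            self.pin_tries_left = self._max_pin_tries  # success resets the counter
            return True
        self.pin_tries_left -= 1
        return False

    def unblock_pin(self, puk, new_pin):
        if self.puk_tries_left == 0:
            raise RuntimeError("card permanently blocked")
        if puk == self._puk:
            self._pin = new_pin                         # PUK entry sets a new PIN
            self.pin_tries_left = self._max_pin_tries
            return True
        self.puk_tries_left -= 1
        return False
```

After the PIN retry counter reaches zero, only a correct PUK (with a replacement PIN) restores access, mirroring how handsets prompt for the PUK once the PIN is blocked.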
The integration of the ETSI framework and the Application management framework of GlobalPlatform is standardized in the UICC configuration.
== References == | Wikipedia/Universal_Integrated_Circuit_Card |
Functional verification is the task of verifying that the logic design conforms to specification. Functional verification attempts to answer the question "Does this proposed design do what is intended?" This is complex and takes the majority of time and effort (up to 70% of design and development time) in most large electronic system design projects. Functional verification is a part of more encompassing design verification, which, besides functional verification, considers non-functional aspects like timing, layout and power.
== Background ==
Although the number of transistors on a chip has increased exponentially according to Moore's law, the number of engineers and the time taken to produce designs increase only linearly. As transistor complexity increases, the number of coding errors also increases. Most errors in logic coding come from careless coding (12.7%), miscommunication (11.4%), and microarchitecture challenges (9.3%). Thus, electronic design automation (EDA) tools were produced to keep up with the complexity of transistor designs, and languages such as Verilog and VHDL were introduced together with the EDA tools.
Functional verification is very difficult because of the sheer volume of possible test cases that exist in even a simple design. Frequently there are more than 10^80 possible tests needed to comprehensively verify a design – a number impossible to cover in a lifetime. The effort is equivalent to program verification, which is NP-hard or even worse, and no solution has been found that works well in all cases. However, the problem can be attacked by many methods. None of them are perfect, but each can be helpful in certain circumstances:
Logic simulation simulates the logic before it is built.
Simulation acceleration applies special purpose hardware to the logic simulation problem.
Emulation builds a version of system using programmable logic. This is expensive, and still much slower than the real hardware, but orders of magnitude faster than simulation. It can be used, for example, to boot the operating system on a processor.
Formal verification attempts to prove mathematically that certain requirements (also expressed formally) are met, or that certain undesired behaviors (such as deadlock) cannot occur.
Intelligent verification uses automation to adapt the testbench to changes in the register transfer level code.
HDL-specific versions of lint, and other heuristics, are used to find common problems.
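The scale of the problem that all of these methods attack can be seen with a back-of-the-envelope calculation. The design sizes below are invented for illustration; the function simply counts the input vectors needed to exercise every state/input combination once:

```python
def exhaustive_vectors(state_bits, input_bits):
    """Input vectors needed to exercise every (state, input) pair once.

    A design with S state bits and I input bits has 2**(S+I) distinct
    (state, input) combinations, each a separate single-cycle test case.
    """
    return 2 ** (state_bits + input_bits)

# A tiny block with 8 flip-flops and 4 inputs is trivially simulable:
small = exhaustive_vectors(8, 4)     # 4096 vectors

# A still-modest block with 256 flip-flops and 32 inputs is hopeless:
large = exhaustive_vectors(256, 32)  # 2**288, roughly 5e86 vectors

print(small)  # 4096
```

Even at a billion simulated vectors per second, the second case would take vastly longer than the age of the universe, which is why simulation relies on directed and random sampling rather than exhaustion.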
== Types ==
There are three types of functional verification, namely: dynamic functional, hybrid dynamic functional/static, and static verification.
Simulation-based verification (also called 'dynamic verification') is widely used to "simulate" the design, since this method scales up very easily. Stimulus is provided to exercise each line in the HDL code. A test-bench is built to functionally verify the design by providing meaningful scenarios to check that, given certain input, the design performs to specification.
A simulation environment is typically composed of several types of components:
The generator generates input vectors that are used to search for anomalies that exist between the intent (specifications) and the implementation (HDL Code). This type of generator utilizes an NP-complete type of SAT Solver that can be computationally expensive. Other types of generators include manually created vectors, Graph-Based generators (GBMs), and proprietary generators. Modern generators create directed-random and random stimuli that are statistically driven to verify random parts of the design. The randomness is important to achieve a high distribution over the huge space of the available input stimuli. To this end, users of these generators intentionally under-specify the requirements for the generated tests. It is the role of the generator to randomly fill this gap. This mechanism allows the generator to create inputs that reveal bugs not being searched for directly by the user. Generators also bias the stimuli toward design corner cases to further stress the logic. Biasing and randomness serve different goals and there are tradeoffs between them, hence different generators have a different mix of these characteristics. Since the input for the design must be valid (legal) and many targets (such as biasing) should be maintained, many generators use the constraint satisfaction problem (CSP) technique to solve the complex testing requirements. The legality of the design inputs and the biasing arsenal are modeled. The model-based generators use this model to produce the correct stimuli for the target design.
The drivers translate the stimuli produced by the generator into the actual inputs for the design under verification. Generators create inputs at a high level of abstraction, namely, as transactions or assembly language. The drivers convert this input into actual design inputs as defined in the specification of the design's interface.
The simulator produces the outputs of the design, based on the design's current state (the state of the flip-flops) and the injected inputs. The simulator has a description of the design net-list. This description is created by synthesizing the HDL to a low gate level net-list.
The monitor converts the state of the design and its outputs to a transaction abstraction level so it can be stored in a 'score-boards' database to be checked later on.
The checker validates that the contents of the 'score-boards' are legal. There are cases where the generator creates expected results, in addition to the inputs. In these cases, the checker must validate that the actual results match the expected ones.
The arbitration manager manages all the above components together.
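The components above can be sketched as a minimal testbench. The toy design under verification (a 4-bit saturating adder), its stimulus constraints, and the golden model are all invented for illustration; real testbenches are typically written in hardware verification languages such as SystemVerilog/UVM rather than Python:

```python
import random

def dut_saturating_add(a, b):
    """Stand-in design under verification: 4-bit add that clips at 15."""
    return min(a + b, 15)

def generator(n, seed=0):
    """Constrained-random stimulus: legal inputs are 4-bit values 0..15."""
    rng = random.Random(seed)
    return [(rng.randint(0, 15), rng.randint(0, 15)) for _ in range(n)]

def driver(txn):
    """Translate an abstract transaction into DUT pin values (trivial here)."""
    return txn

def monitor(txn, result, scoreboard):
    """Record each (inputs, output) pair in the scoreboard for later checking."""
    scoreboard.append((txn, result))

def checker(scoreboard):
    """Compare every recorded DUT output against a golden reference model."""
    return [(t, r) for t, r in scoreboard if r != min(sum(t), 15)]

scoreboard = []
for txn in generator(1000):
    a, b = driver(txn)
    monitor(txn, dut_saturating_add(a, b), scoreboard)

assert checker(scoreboard) == []  # no mismatches: the DUT matches the model
```

The arbitration manager is folded into the top-level loop here; in a production environment it would schedule the components and manage phasing and end-of-test conditions.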
Different coverage metrics are defined to assess that the design has been adequately exercised. These include functional coverage (has every functionality of the design been exercised?), statement coverage (has each line of HDL been exercised?), and branch coverage (has each direction of every branch been exercised?).
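Functional coverage, as described above, can be illustrated with a toy bin collector. The bins and the stimulus stream below are invented for the example:

```python
def coverage_bins(samples, bins):
    """Return the fraction of bins hit and the names of the missed bins."""
    hit = set()
    for s in samples:
        for name, (lo, hi) in bins.items():
            if lo <= s <= hi:
                hit.add(name)
    return len(hit) / len(bins), sorted(set(bins) - hit)

# Bins over a 4-bit input value; boundaries are invented for illustration.
bins = {"zero": (0, 0), "low": (1, 7), "high": (8, 14), "max": (15, 15)}

# A stimulus stream that cycles through 0..14 and never produces 15:
samples = [v % 15 for v in range(50)]

score, missed = coverage_bins(samples, bins)
print(score, missed)  # 0.75 ['max'] -- the corner case was never exercised
```

A coverage hole like the missed "max" bin is exactly what drives verification engineers to bias generators toward corner cases, closing the loop between coverage measurement and stimulus generation.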
== See also ==
Analog verification
Cleanroom software engineering
High-level verification
== References == | Wikipedia/Functional_verification |
Integrated passive devices (IPDs), also known as integrated passive components (IPCs) or embedded passive components (EPC), are electronic components where resistors (R), capacitors (C), inductors (L)/coils/chokes, microstriplines, impedance matching elements, baluns or any combinations of them are integrated in the same package or on the same substrate. Integrated passives are sometimes also called embedded passives, although the technical distinction between the two terms remains unclear. In both cases, passives are realized between dielectric layers or on the same substrate.
The earliest forms of IPDs are resistor, capacitor, resistor-capacitor (RC) or resistor-capacitor-coil/inductor (RCL) networks. Passive transformers can also be realised as integrated passive devices, for example by putting two coils on top of each other separated by a thin dielectric layer. Sometimes diodes (PN, PIN, Zener, etc.) can be integrated on the same substrate as the passives, particularly if the substrate is silicon or another semiconductor such as gallium arsenide (GaAs).
== Description ==
Integrated passive devices can be packaged, supplied as bare dies/chips, or even stacked (assembled on top of another bare die/chip) in the third dimension (3D) with active integrated circuits or other IPDs in an electronic system assembly. Typical packages for integrated passives are SIL (single in-line), SIP or any other packages (such as DIL, DIP, QFN, chip-scale package/CSP, wafer-level package/WLP) used in electronic packaging. Integrated passives can also act as a module substrate, and therefore be part of a hybrid module, multi-chip module or chiplet module/implementation.
The substrate for IPDs can be rigid, such as ceramic (aluminium oxide/alumina), layered ceramic (low temperature co-fired ceramic/LTCC, high temperature co-fired ceramic/HTCC), glass, or silicon coated with a dielectric layer such as silicon dioxide. The substrate can also be flexible, such as a laminate, e.g. a package interposer (then called an active interposer), FR4 or similar, Kapton or any other suitable polyimide. It is beneficial for the electronic system design if the effect of the substrate and the possible package on the performance of the IPDs can be neglected or is known.
IPDs are manufactured using thick- and thin-film technologies and a variety of integrated circuit processing steps, or modifications of them (such as thicker metal layers, or metals other than aluminium or copper). Integrated passives are available as standard components/parts or as devices custom-designed for a specific application.
== Applications ==
Integrated passive devices are mainly used, as standard parts or custom designs, because of:
the need to reduce the number of parts to be assembled in an electronic system, minimizing the logistics required.
the need to miniaturize electronics (in area and height), for example for medical (hearing aids), wearable (watches, intelligent rings, wearable heart-rate monitors) and portable use (mobile phones, tablets, etc.). Striplines, baluns and similar structures in the radio frequency (RF) parts of a system can be miniaturized with IPDs at tighter tolerances, particularly if thin-film technology is used. IPD chips can be stacked with active or other integrated passive chips if ultimate miniaturisation is the target.
the need to reduce the weight of electronic assemblies, for example in space, aerospace or unmanned aerial vehicle (UAV, e.g. drone) applications.
electronic designs that require numerous passives with the same value, such as several one-nanofarad (1 nF) capacitors. This may happen in implementations using integrated circuits (ICs) with a high input/output count, where many high-speed signals or power supply lines need stabilization by capacitors. Digital implementations use parallel lines (4-, 8-, 16-, 32-, 64-bit, etc.), and stabilizing all signal lines results in islands of capacitors in the implementation. Miniaturizing these may lead to integrated capacitor networks or arrays of capacitors. They may also be embedded in an integrated circuit package, such as the substrate or interposer of a BGA or CSP (chip-scale package).
electronic designs that require extensive electromagnetic interference (EMI) or electrostatic discharge (ESD) suppression, such as designs with high-pin-count connectors in interfaces. EMI or ESD suppression is typically realized with RC or R(C)-diode networks.
limitations in the performance (such as the Q factor of coils) and values (such as large capacitances) of passive elements available in integrated circuit technologies such as CMOS when monolithically integrated with active elements (transistors, etc.). If the size (area or thickness) and/or weight of an electronics assembly must be minimized and standard parts are not available, custom IPDs may be the only route to the smallest part count, size or weight.
improved reliability, when the number of interfaces between different technologies (monolithic, packaging, electronics and optics/photonics, assemblies such as surface-mount technology and integrated circuits, etc.) needs to be minimised.
timing in some applications, for example where fast and very precise filtering (R(L)C, etc.) is critical and a solution based on discrete SMD parts is not fast or predictable enough.
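The filtering and tolerance points above can be illustrated with a first-order RC low-pass filter, whose -3 dB cutoff is f_c = 1/(2πRC): tighter component tolerances directly narrow the spread of the cutoff. A minimal sketch; the component values and tolerance figures are illustrative assumptions, not from the text:

```python
import math

def rc_cutoff_hz(r_ohm, c_farad):
    """-3 dB cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def cutoff_spread(r_ohm, c_farad, tol):
    """Worst-case cutoff range when both R and C vary by +/- tol."""
    f_lo = rc_cutoff_hz(r_ohm * (1 + tol), c_farad * (1 + tol))
    f_hi = rc_cutoff_hz(r_ohm * (1 - tol), c_farad * (1 - tol))
    return f_lo, f_hi

nominal = rc_cutoff_hz(1e3, 1e-9)           # ~159 kHz for 1 kOhm, 1 nF
discrete = cutoff_spread(1e3, 1e-9, 0.10)   # assumed +/-10% discrete parts
ipd = cutoff_spread(1e3, 1e-9, 0.01)        # assumed +/-1% matched IPD
```

Under these assumed tolerances the IPD variant confines the cutoff to a band roughly ten times narrower than the discrete one, which is the kind of predictability the timing-critical applications above depend on.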
The challenge of custom IPDs compared with standard integrated or discrete passives, however, is the lead time before they are available for assembly, and sometimes also performance. Depending on the manufacturing technology, high capacitance or resistance values at a required tolerance may be hard to meet, and the Q value of coils/inductors may be limited by the thickness of the available metal layers. However, new materials and improved manufacturing techniques, such as atomic layer deposition (ALD) and a better understanding of depositing and controlling thick metal alloys on large substrates, are improving capacitance density and inductor Q values.
In prototyping and small- or medium-volume production, standard parts/passives are therefore in many cases the fastest route to realization. Custom-designed passives can be considered for volume manufacturing after careful technical and economic analysis, if the time-to-market and cost targets of the product(s) can be met. Integrated passive devices are thus continuously challenged, technically and economically, by the decreasing size, improving tolerances and falling cost of discrete/separate passive devices, and by the improving accuracy of the assembly techniques (such as surface-mount technology, SMT) used for system motherboards. Going forward, discrete and integrated passives will complement each other technically; the development and understanding of new materials and assembly techniques is a key enabler for both.
== Fabrication ==
=== IPDs on a silicon substrate ===
IPDs on a silicon substrate are generally fabricated using standard wafer fabrication technologies such as thin-film and photolithography processing. To avoid parasitic effects caused by the semiconducting substrate, high-resistivity silicon is typically used for integrated passives. IPDs on silicon can be designed as flip-chip-mountable or wire-bondable components. To differentiate from active integrated circuit (IC) technologies, IPD processes may use thicker metal layers (for higher inductor Q), different resistive layers (such as SiCr), or thinner or higher-k (higher dielectric constant) dielectric layers (such as PZT instead of silicon dioxide or silicon nitride) for higher capacitance density than typical IC technologies.
IPDs on silicon can be ground, if needed, to below 100 μm in thickness, and are available with many packaging options (micro-bumping, wire bonding, copper pads) and delivery modes (wafers, bare dies, tape & reel).
3D passive integration in silicon is one of the technologies used to manufacture IPDs, enabling high-density trench capacitors, metal-insulator-metal (MIM) capacitors, resistors, high-Q inductors, and PIN, Schottky or Zener diodes to be implemented in silicon. The design time of IPDs on silicon depends on the complexity of the design, but the same design tools and environment used for application-specific integrated circuits (ASICs) can be used. Some IPD suppliers offer full design-kit support, so that system-in-package (SiP) module manufacturers or system houses can design their own IPDs to their specific application requirements.
== History ==
In early control system design it was discovered that using components of the same value made designs easier and faster to produce. One way to implement passive components with the same value, or in practice with the smallest possible spread, is to place them close together on the same substrate.
The earliest integrated passive devices were resistor networks in the 1960s, when four to eight resistors were packaged in a single in-line package (SIP) by Vishay Intertechnology. Many other package types used for integrated circuits (DILs, DIPs, etc.), and even customised packages, are used for integrated passive devices. Resistor, capacitor and resistor-capacitor networks are still widely used in systems even though monolithic integration has progressed.
Today, portable electronic systems include roughly 2–40 discrete passive devices per integrated circuit or module. This shows that monolithic or module integration cannot provide all passive-component functionality in system realisations, and a variety of technologies is needed to minimize logistics and system size. This is the application area for IPDs. By count, most of the passives in electronic systems are typically capacitors, followed by resistors and inductors/coils.
Many functional blocks, such as impedance matching circuits, harmonic filters, couplers, baluns and power combiners/dividers, can be realized with IPD technology. IPDs are generally fabricated using thin-film, thick-film and wafer fabrication technologies such as photolithography processing, or typical ceramic technologies (LTCC and HTCC). IPDs can be designed as flip-chip-mountable or wire-bondable components.
Trends towards applications with small size, portability and wireless connectivity have stretched various implementation technologies in their ability to realize passive components. In 2021, there were 25–30 companies worldwide delivering integrated passive devices (including simple passive networks and passives on various substrates such as glass, silicon and alumina).
== See also ==
Electronic component#Passive components, for discrete devices
Surface-mount technology
Integrated circuit
== References ==
== External links ==
Integrated Passives in short 2017
Integrated passive technologies
Integrated passives in SIPs
Database of passive manufacturers world-wide. Search 'network' for passive networks
Integration of Passive Components in Thin-Film Multilayer at FhG
Integration of Passives with layered ceramics
Integrated Passive Devices, Electronics conference, 2012, HITEC
ST Integrated passive devices foundry
Integrated passives devices for RF applications
IPD technology from STATS chipPAC Ltd.
IPD technology from ASE Group
IPDs from Analog Devices
IPDs from On Semiconductor
Integrated passives on silicon from Murata including IPDIA
Capacitors embedded in interposer laminate by TDK
Integrated passives from Johanson Technology
Assessing cost effectiveness of integrated passives
Passive integration studies at Georgia Tech, US
Example of cost analysis of integrated/embedded passives
Example of 3D integrated capacitors manufacturing and performance
High density capacitor technology from Smoltek
The MOSFET (metal–oxide–semiconductor field-effect transistor) is a type of insulated-gate field-effect transistor (IGFET) that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the covered gate determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals.
The MOSFET is the basic building block of most modern electronics, and the most frequently manufactured device in history, with an estimated total of 13 sextillion (1.3 × 10²²) MOSFETs manufactured between 1960 and 2018. It is the most common semiconductor device in digital and analog circuits, and the most common power device. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. MOSFET scaling and miniaturization have been driving the rapid exponential growth of electronic semiconductor technology since the 1960s, and enable high-density integrated circuits (ICs) such as memory chips and microprocessors.
MOSFETs in integrated circuits are the primary elements of computer processors, semiconductor memory, image sensors, and most other types of integrated circuits. Discrete MOSFET devices are widely used in applications such as switch mode power supplies, variable-frequency drives, and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement, and home and automobile sound systems.
== Integrated circuits ==
The MOSFET, invented by a Bell Labs team under Mohamed Atalla and Dawon Kahng between 1959 and 1960, is the most widely used type of transistor and the most critical device component in integrated circuit (IC) chips. The planar process, developed by Jean Hoerni at Fairchild Semiconductor in early 1959, was also critical to the invention of the monolithic integrated circuit chip by Robert Noyce later in 1959. This was followed by the development of clean rooms to reduce contamination to levels never before thought necessary, and coincided with the development of photolithography which, along with surface passivation and the planar process, allowed circuits to be made in a few steps.
Atalla realised that the main advantage of a MOS transistor was its ease of fabrication, particularly suiting it for use in the recently invented integrated circuits. In contrast to bipolar transistors which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps but could be easily isolated from each other. Its advantage for integrated circuits was re-iterated by Dawon Kahng in 1961. The Si–SiO2 system possessed the technical attractions of low cost of production (on a per circuit basis) and ease of integration. These two factors, along with its rapidly scaling miniaturization and low energy consumption, led to the MOSFET becoming the most widely used type of transistor in IC chips.
The earliest experimental MOS IC to be demonstrated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuits in 1964, consisting of 120 p-channel transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. In 1967, Bell Labs researchers Robert Kerwin, Donald Klein and John Sarace developed the self-aligned gate (silicon-gate) MOS transistor, which Fairchild Semiconductor researchers Federico Faggin and Tom Klein used to develop the first silicon-gate MOS IC.
=== Chips ===
There are various types of MOS IC chips.
=== Large-scale integration ===
With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density IC chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of MOSFETs on a chip by the late 1960s. MOS technology enabled the integration of more than 10,000 transistors on a single LSI chip by the early 1970s, before later enabling very large-scale integration (VLSI).
=== Microprocessors ===
The MOSFET is the basis of every microprocessor, and was responsible for the invention of the microprocessor. The origins of both the microprocessor and the microcontroller can be traced back to the invention and development of MOS technology. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.
The earliest microprocessors were all MOS chips, built with MOS LSI circuits. The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first commercial single-chip microprocessor, the Intel 4004, was developed by Federico Faggin, using his silicon-gate MOS IC technology, with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. With the arrival of CMOS microprocessors in 1975, the term "MOS microprocessors" began to refer to chips fabricated entirely from PMOS logic or fabricated entirely from NMOS logic, contrasted with "CMOS microprocessors" and "bipolar bit-slice processors".
== CMOS circuits ==
Complementary metal–oxide–semiconductor (CMOS) logic was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. CMOS had lower power consumption, but was initially slower than NMOS, which was more widely used for computers in the 1970s. In 1978, Hitachi introduced the twin-well CMOS process, which allowed CMOS to match the performance of NMOS with less power consumption. The twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computers in the 1980s. By the 1980s, CMOS logic consumed about one-seventh the power of NMOS logic, and about 100,000 times less power than bipolar transistor-transistor logic (TTL).
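One reason CMOS power consumption fell so sharply is that a static CMOS gate draws significant current only while switching, so its power is dominated by the dynamic term P ≈ α·C·V²·f. A simplified sketch of this relationship, ignoring leakage and short-circuit currents; the load capacitance, frequency and supply voltages are illustrative values, not figures from the text:

```python
def cmos_dynamic_power(c_load_f, v_dd, f_hz, activity=1.0):
    """Dynamic switching power of a CMOS node: P = a * C * Vdd^2 * f,
    where 'a' is the activity factor (fraction of cycles the node switches)."""
    return activity * c_load_f * v_dd ** 2 * f_hz

# Illustrative node: 10 fF load switching every cycle at 100 MHz.
p_5v = cmos_dynamic_power(10e-15, 5.0, 100e6)   # 25 uW at a 5 V supply
p_1v = cmos_dynamic_power(10e-15, 1.0, 100e6)   # 1 uW at a 1 V supply
```

The quadratic dependence on supply voltage is why supply scaling has been the main lever for reducing CMOS power.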
=== Digital ===
The growth of digital technologies like the microprocessor has provided the motivation to advance MOSFET technology faster than any other type of silicon-based transistor. A big advantage of MOSFETs for digital switching is that the oxide layer between the gate and the channel prevents DC current from flowing through the gate, further reducing power consumption and giving a very large input impedance. The insulating oxide between the gate and channel effectively isolates a MOSFET in one logic stage from earlier and later stages, which allows a single MOSFET output to drive a considerable number of MOSFET inputs. Bipolar transistor-based logic (such as TTL) does not have such a high fanout capacity. This isolation also allows designers to ignore, to some extent, loading effects between logic stages; that extent is set by the operating frequency, since as frequencies increase, the input impedance of the MOSFETs decreases.
=== Analog ===
The MOSFET's advantages in digital circuits do not translate into supremacy in all analog circuits. The two types of circuit draw upon different features of transistor behavior. Digital circuits switch, spending most of their time either fully on or fully off. The transition from one to the other is only of concern with regards to speed and charge required. Analog circuits depend on operation in the transition region where small changes to Vgs can modulate the output (drain) current. The JFET and bipolar junction transistor (BJT) are preferred for accurate matching (of adjacent devices in integrated circuits), higher transconductance and certain temperature characteristics which simplify keeping performance predictable as circuit temperature varies.
Nevertheless, MOSFETs are widely used in many types of analog circuits because of their own advantages (zero gate current, high and adjustable output impedance, and improved robustness vs. BJTs, which can be permanently degraded by even lightly breaking down the emitter-base junction). The characteristics and performance of many analog circuits can be scaled up or down by changing the sizes (length and width) of the MOSFETs used. By comparison, in bipolar transistors the size of the device does not significantly affect its performance. MOSFETs' ideal characteristics regarding gate current (zero) and drain-source offset voltage (zero) also make them nearly ideal switch elements, and make switched-capacitor analog circuits practical. In their linear region, MOSFETs can be used as precision resistors, which can have a much higher controlled resistance than BJTs. In high-power circuits, MOSFETs sometimes have the advantage of not suffering from thermal runaway as BJTs do. Also, MOSFETs can be configured to perform as capacitors and gyrator circuits which allow op-amps made from them to appear as inductors, thereby allowing all of the normal analog devices on a chip (except for diodes, which can be made smaller than a MOSFET anyway) to be built entirely out of MOSFETs. This means that complete analog circuits can be made on a silicon chip in a much smaller space and with simpler fabrication techniques. MOSFETs are also well suited to switching inductive loads because of their tolerance of inductive kickback.
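The precision-resistor behaviour mentioned above follows from the standard square-law model of the linear (triode) region, I_D = k′(W/L)[(V_GS − V_t)V_DS − V_DS²/2]: for small V_DS the device looks like a resistance R_on ≈ 1/(k′(W/L)(V_GS − V_t)) that the gate voltage can tune. A sketch using the textbook model; the process parameter k′ and the bias values are illustrative assumptions:

```python
def triode_current(k_prime, w_over_l, vgs, vt, vds):
    """Drain current of an n-MOSFET in the linear (triode) region:
    Id = k' * (W/L) * ((Vgs - Vt) * Vds - Vds^2 / 2),
    valid while 0 < Vds < Vgs - Vt."""
    vov = vgs - vt  # overdrive voltage
    assert 0 < vds < vov, "device must be in the triode region"
    return k_prime * w_over_l * (vov * vds - vds ** 2 / 2)

def on_resistance(k_prime, w_over_l, vgs, vt):
    """Small-Vds resistance: Ron ~ 1 / (k' * (W/L) * (Vgs - Vt))."""
    return 1.0 / (k_prime * w_over_l * (vgs - vt))

# Illustrative values: k' = 100 uA/V^2, W/L = 10, Vgs = 1.8 V, Vt = 0.5 V.
r_on = on_resistance(100e-6, 10, 1.8, 0.5)  # ~769 ohms
```

Because R_on depends on V_GS − V_t, the same device can realize a range of resistances by adjusting the gate bias, which is what makes MOSFET "resistors" attractive on-chip.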
Some ICs combine analog and digital MOSFET circuitry on a single mixed-signal integrated circuit, making the needed board space even smaller. This creates a need to isolate the analog circuits from the digital circuits on a chip level, leading to the use of isolation rings and silicon on insulator (SOI). Since MOSFETs require more space to handle a given amount of power than a BJT, fabrication processes can incorporate BJTs and MOSFETs into a single device. Mixed-transistor devices are called bi-FETs (bipolar FETs) if they contain just one BJT-FET and BiCMOS (bipolar-CMOS) if they contain complementary BJT-FETs. Such devices have the advantages of both insulated gates and higher current density.
=== RF CMOS ===
In the late 1980s, Asad Abidi pioneered RF CMOS technology, which uses MOS VLSI circuits, while working at UCLA. This changed the way in which RF circuits were designed, away from discrete bipolar transistors and towards CMOS integrated circuits. As of 2008, the radio transceivers in all wireless networking devices and modern mobile phones are mass-produced as RF CMOS devices. RF CMOS is also used in nearly all modern Bluetooth and wireless LAN (WLAN) devices.
== Analog switches ==
MOSFET analog switches use the MOSFET to pass analog signals when on, and as a high impedance when off. Signals flow in both directions across a MOSFET switch. In this application, the drain and source of a MOSFET exchange places depending on the relative voltages of the source/drain electrodes. The source is the more negative side for an N-MOS or the more positive side for a P-MOS. All of these switches are limited on what signals they can pass or stop by their gate–source, gate–drain, and source–drain voltages; exceeding the voltage, current, or power limits will potentially damage the switch.
=== Single-type ===
This analog switch uses a four-terminal simple MOSFET of either P or N type.
In the case of an n-type switch, the body is connected to the most negative supply (usually GND) and the gate is used as the switch control. Whenever the gate voltage exceeds the source voltage by at least a threshold voltage, the MOSFET conducts. The higher the voltage, the more the MOSFET can conduct. An N-MOS switch passes all voltages less than Vgate − Vtn. When the switch is conducting, it typically operates in the linear (or ohmic) mode of operation, since the source and drain voltages will typically be nearly equal.
In the case of a P-MOS, the body is connected to the most positive voltage, and the gate is brought to a lower potential to turn the switch on. The P-MOS switch passes all voltages higher than Vgate − Vtp (threshold voltage Vtp is negative in the case of enhancement-mode P-MOS).
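The pass-voltage rules for the two single-type switches can be captured as simple predicates; the threshold voltages and signal levels below are illustrative assumptions:

```python
def nmos_passes(v_signal, v_gate, vtn=0.7):
    """An n-MOS switch conducts while Vgate - Vsignal > Vtn,
    i.e. it passes voltages below Vgate - Vtn (it degrades a 'strong 1')."""
    return v_signal < v_gate - vtn

def pmos_passes(v_signal, v_gate, vtp=-0.7):
    """A p-MOS switch passes voltages above Vgate - Vtp (it degrades a
    'strong 0'); Vtp is negative for an enhancement-mode device."""
    return v_signal > v_gate - vtp

# With a 3.3 V gate drive and Vtn = 0.7 V, an n-MOS switch cannot pass
# signals above about 2.6 V:
nmos_passes(2.0, 3.3)   # True
nmos_passes(3.0, 3.3)   # False
```

These complementary blind spots near the supply rails are exactly what the dual-type (CMOS) switch described next is designed to eliminate.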
=== Dual-type (CMOS) ===
This "complementary" or CMOS type of switch uses one P-MOS and one N-MOS FET to counteract the limitations of the single-type switch. The FETs have their drains and sources connected in parallel, the body of the P-MOS is connected to the high potential (VDD) and the body of the N-MOS is connected to the low potential (gnd). To turn the switch on, the gate of the P-MOS is driven to the low potential and the gate of the N-MOS is driven to the high potential. For voltages between VDD − Vtn and gnd − Vtp, both FETs conduct the signal; for voltages less than gnd − Vtp, the N-MOS conducts alone; and for voltages greater than VDD − Vtn, the P-MOS conducts alone.
The voltage limits for this switch are the gate–source, gate–drain and source–drain voltage limits for both FETs. Also, the P-MOS is typically two to three times wider than the N-MOS, so the switch will be balanced for speed in the two directions.
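The three conduction regions of the complementary switch can be sketched directly from the rules above; supply and threshold voltages here are illustrative assumptions:

```python
def transmission_gate_conducting(v_signal, vdd=3.3, vtn=0.7, vtp=-0.7):
    """Which devices of an on CMOS transmission gate conduct a given
    signal voltage (n-MOS gate at VDD, p-MOS gate at ground)."""
    nmos_on = v_signal < vdd - vtn   # n-MOS passes voltages below VDD - Vtn
    pmos_on = v_signal > 0.0 - vtp   # p-MOS passes voltages above gnd - Vtp
    return nmos_on, pmos_on

# At mid-rail both devices conduct; near the rails one device takes over:
transmission_gate_conducting(1.6)   # (True, True)  both conduct
transmission_gate_conducting(0.2)   # (True, False) n-MOS alone
transmission_gate_conducting(3.1)   # (False, True) p-MOS alone
```

At every signal voltage between the rails at least one device conducts, which is why the complementary switch passes the full signal range where a single-type switch cannot.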
Tri-state circuitry sometimes incorporates a CMOS MOSFET switch on its output to provide for a low-ohmic, full-range output when on, and a high-ohmic, mid-level signal when off.
== MOS memory ==
The advent of the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores in computer memory. The first modern computer memory was introduced in 1965, when John Schmidt at Fairchild Semiconductor designed the first MOS semiconductor memory, a 64-bit MOS SRAM (static random-access memory). SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data.
MOS technology is the basis for DRAM (dynamic random-access memory). In 1966, Dr. Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM (dynamic random-access memory) memory cell, based on MOS technology. MOS memory enabled higher performance, was cheaper, and consumed less power, than magnetic-core memory, leading to MOS memory overtaking magnetic core memory as the dominant computer memory technology by the early 1970s.
Frank Wanlass, while studying MOSFET structures in 1963, noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM (erasable programmable read-only memory) technology. In 1967, Dawon Kahng and Simon Sze proposed that floating-gate memory cells, consisting of floating-gate MOSFETs (FGMOS), could be used to produce reprogrammable ROM (read-only memory). Floating-gate memory cells later became the basis for non-volatile memory (NVM) technologies including EPROM, EEPROM (electrically erasable programmable ROM) and flash memory.
=== Types of MOS memory ===
There are various types of MOS memory.
== MOS sensors ==
A number of MOSFET sensors have been developed, for measuring physical, chemical, biological and environmental parameters. The earliest MOSFET sensors include the open-gate FET (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode.
By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
The two main types of image sensors used in digital imaging technology are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on MOS technology, with the CCD based on MOS capacitors and the CMOS sensor based on MOS transistors.
=== Image sensors ===
MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.
The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team at NASA's Jet Propulsion Laboratory in the early 1990s.
MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 μm NMOS sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.
=== Other sensors ===
MOS sensors, also known as MOSFET sensors, are widely used to measure physical, chemical, biological and environmental parameters. The ion-sensitive field-effect transistor (ISFET), for example, is widely used in biomedical applications.
MOSFETs are also widely used in microelectromechanical systems (MEMS), as silicon MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965.
== Power MOSFET ==
The power MOSFET, which is commonly used in power electronics, was developed in the early 1970s. The power MOSFET enables low gate drive power, fast switching speed, and advanced paralleling capability.
The power MOSFET is the most widely used power device in the world. Advantages over bipolar junction transistors in power electronics include MOSFETs not requiring a continuous flow of drive current to remain in the ON state, offering higher switching speeds, lower switching power losses, lower on-resistances, and reduced susceptibility to thermal runaway. The power MOSFET had an impact on power supplies, enabling higher operating frequencies, size and weight reduction, and increased volume production.
Switching power supplies are the most common applications for power MOSFETs. They are also widely used for MOS RF power amplifiers, which enabled the transition of mobile networks from analog to digital in the 1990s. This led to the wide proliferation of wireless mobile networks, which revolutionised telecommunications systems. The LDMOS in particular is the most widely used power amplifier in mobile networks such as 2G, 3G, 4G and 5G, as well as broadcasting and amateur radio. Over 50 billion discrete power MOSFETs are shipped annually, as of 2018. They are widely used for automotive, industrial and communications systems in particular. Power MOSFETs are commonly used in automotive electronics, particularly as switching devices in electronic control units, and as power converters in modern electric vehicles. The insulated-gate bipolar transistor (IGBT), a hybrid MOS-bipolar transistor, is also used for a wide variety of applications.
LDMOS, a power MOSFET with a lateral structure, is commonly used in high-end audio amplifiers and high-power PA systems. Its advantage is better behaviour in the saturated region (corresponding to the linear region of a bipolar transistor) than that of vertical MOSFETs, which are designed for switching applications.
=== DMOS and VMOS ===
Power MOSFETs, including DMOS, LDMOS and VMOS devices, are commonly used for a wide range of other applications, which include the following.
=== RF DMOS ===
RF DMOS, also known as RF power MOSFET, is a type of DMOS power transistor designed for radio-frequency (RF) applications. It is used in various radio and RF applications, which include the following.
== Consumer electronics ==
MOSFETs are fundamental to the consumer electronics industry. According to Colinge, numerous consumer electronics, such as digital wristwatches, pocket calculators, and video games, would not exist without the MOSFET.
MOSFETs are commonly used for a wide range of consumer electronics, which include the following devices. Computers and telecommunication devices (such as phones) are not included here; they are listed separately in the Information and communications technology (ICT) section below.
=== Pocket calculators ===
One of the earliest influential consumer electronic products enabled by MOS LSI circuits was the electronic pocket calculator, as MOS LSI technology enabled large amounts of computational capability in small packages. In 1965, the Victor 3900 desktop calculator was the first MOS LSI calculator, with 29 MOS LSI chips. In 1967 the Texas Instruments Cal-Tech was the first prototype electronic handheld calculator, with three MOS LSI chips, and it was later released as the Canon Pocketronic in 1970. The Sharp QT-8D desktop calculator was the first mass-produced LSI MOS calculator in 1969, and the Sharp EL-8 which used four MOS LSI chips was the first commercial electronic handheld calculator in 1970. The first true electronic pocket calculator was the Busicom LE-120A HANDY LE, which used a single MOS LSI calculator-on-a-chip from Mostek, and was released in 1971. By 1972, MOS LSI circuits were commercialized for numerous other applications.
=== Audio-visual (AV) media ===
MOSFETs are commonly used for a wide range of audio-visual (AV) media technologies, which include the following applications.
=== Power MOSFET applications ===
Power MOSFETs are commonly used for a wide range of consumer electronics. Power MOSFETs are widely used in the following consumer applications.
== Information and communications technology (ICT) ==
MOSFETs are fundamental to information and communications technology (ICT), including modern computers and computing, telecommunications, the communications infrastructure, the Internet, digital telephony, wireless telecommunications, and mobile networks. According to Colinge, the modern computer industry and digital telecommunication systems would not exist without the MOSFET. Advances in MOS technology have been the most important contributing factor in the rapid rise of network bandwidth in telecommunication networks, with bandwidth doubling every 18 months, from bits per second to terabits per second (Edholm's law).
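The Edholm's-law trend can be turned into a back-of-the-envelope calculation: growing from 1 bit/s to 1 Tbit/s takes log2(10^12), about 40 doublings, or roughly six decades at one doubling every 18 months. The rates and doubling period come from the sentence above; the helper names are my own:

```python
import math

def doublings(start_bps: float, end_bps: float) -> float:
    """How many bandwidth doublings separate two data rates."""
    return math.log2(end_bps / start_bps)

def years_under_edholm(start_bps: float, end_bps: float,
                       doubling_period_years: float = 1.5) -> float:
    """Years implied by a fixed doubling period (18 months = 1.5 years)."""
    return doublings(start_bps, end_bps) * doubling_period_years

if __name__ == "__main__":
    print(round(doublings(1, 1e12), 1))           # ~39.9 doublings
    print(round(years_under_edholm(1, 1e12), 1))  # ~59.8 years
```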
=== Computers ===
MOSFETs are commonly used in a wide range of computers and computing applications, which include the following.
=== Telecommunications ===
MOSFETs are commonly used in a wide range of telecommunications, which include the following applications.
=== Power MOSFET applications ===
== Insulated-gate bipolar transistor (IGBT) ==
The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT). As of 2010, the IGBT is the second most widely used power transistor, after the power MOSFET. The IGBT accounts for 27% of the power transistor market, second only to the power MOSFET (53%), and ahead of the RF amplifier (11%) and bipolar junction transistor (9%). The IGBT is widely used in consumer electronics, industrial technology, the energy sector, aerospace electronic devices, and transportation.
The IGBT is widely used in the following applications.
== Quantum physics ==
=== 2D electron gas and quantum Hall effect ===
In quantum physics and quantum mechanics, the MOSFET is the basis for two-dimensional electron gas (2DEG) and the quantum Hall effect. The MOSFET enables physicists to study electron behavior in a two-dimensional gas, called a two-dimensional electron gas. In a MOSFET, conduction electrons travel in a thin surface layer, and a "gate" voltage controls the number of charge carriers in this layer. This allows researchers to explore quantum effects by operating high-purity MOSFETs at liquid helium temperatures.
In 1978, the Gakushuin University researchers Jun-ichi Wakabayashi and Shinji Kawaji observed the Hall effect in experiments carried out on the inversion layer of MOSFETs. In 1980, Klaus von Klitzing, working at the high magnetic field laboratory in Grenoble with silicon-based MOSFET samples developed by Michael Pepper and Gerhard Dorda, made the unexpected discovery of the quantum Hall effect.
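The effect von Klitzing discovered appears as Hall-resistance plateaus at R_xy = h/(ν e²) for integer ν; the ν = 1 value, h/e² ≈ 25812.8 Ω, is now known as the von Klitzing constant. This is the standard textbook formula (not stated explicitly in the article); it can be checked with the exact SI values of h and e:

```python
# Integer quantum Hall plateaus: R_xy = h / (nu * e^2).
# h and e are exact by definition in the 2019 SI revision.
PLANCK_H = 6.62607015e-34       # Planck constant, J*s
ELEMENTARY_E = 1.602176634e-19  # elementary charge, C

def hall_resistance(nu: int) -> float:
    """Hall resistance of the nu-th integer quantum Hall plateau, in ohms."""
    return PLANCK_H / (nu * ELEMENTARY_E ** 2)

if __name__ == "__main__":
    for nu in (1, 2, 4):
        print(nu, round(hall_resistance(nu), 3))  # nu=1 gives ~25812.807
```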
=== Quantum technology ===
The MOSFET is used in quantum technology. A quantum field-effect transistor (QFET) or quantum well field-effect transistor (QWFET) is a type of MOSFET that takes advantage of quantum tunneling to greatly increase the speed of transistor operation.
== Transportation ==
MOSFETs are widely used in transportation. For example, they are commonly used for automotive electronics in the automotive industry. MOS technology is commonly used for a wide range of vehicles and transportation, which include the following applications.
=== Automotive industry ===
MOSFETs are widely used in the automotive industry, particularly for automotive electronics in motor vehicles. Automotive applications include the following.
=== Power MOSFET applications ===
Power MOSFETs are widely used in transportation technology, which includes the following vehicles.
In the automotive industry, power MOSFETs are widely used in automotive electronics, which include the following.
=== IGBT applications ===
As noted above, the IGBT combines the characteristics of a MOSFET and a bipolar junction transistor (BJT). IGBTs are widely used in the following transportation applications.
=== Space industry ===
In the space industry, MOSFET devices were adopted by NASA for space research in 1964, for its Interplanetary Monitoring Platform (IMP) program and Explorers space exploration program. The use of MOSFETs was a major step forward in the electronics design of spacecraft and satellites. The IMP D (Explorer 33), launched in 1966, was the first spacecraft to use the MOSFET. Data gathered by IMP spacecraft and satellites were used to support the Apollo program, enabling the first crewed Moon landing with the Apollo 11 mission in 1969.
The Cassini–Huygens mission to Saturn, launched in 1997, accomplished spacecraft power distribution with 192 solid-state power switch (SSPS) devices, which also functioned as circuit breakers in the event of an overload condition. The switches were developed from a combination of two semiconductor devices with switching capabilities: the MOSFET and the ASIC (application-specific integrated circuit). This combination resulted in advanced power switches that had better performance characteristics than traditional mechanical switches.
== Other applications ==
MOSFETs are commonly used for a wide range of other applications, which include the following.
The first planar monolithic integrated circuit (IC) chip was demonstrated in 1960. The idea of integrating electronic circuits into a single device was born when the German physicist and engineer Werner Jacobi developed and patented the first known integrated transistor amplifier in 1949 and the British radio engineer Geoffrey Dummer proposed to integrate a variety of standard electronic components in a monolithic semiconductor crystal in 1952. A year later, Harwick Johnson filed a patent for a prototype IC. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other.
These ideas could not be implemented by the industry until a breakthrough came in late 1958. Three people from three U.S. companies solved the three fundamental problems that hindered the production of integrated circuits. Jack Kilby of Texas Instruments patented the principle of integration, created the first prototype ICs and commercialized them. Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (monolithic IC) chip. Between late 1958 and early 1959, Kurt Lehovec of Sprague Electric Company developed a way to electrically isolate components on a semiconductor crystal, using p–n junction isolation.
The first monolithic IC chip was invented by Robert Noyce of Fairchild Semiconductor. He invented a way to connect the IC components (aluminium metallization) and proposed an improved version of insulation based on the planar process technology developed by Jean Hoerni. On September 27, 1960, using the ideas of Noyce and Hoerni, a group led by Jay Last at Fairchild Semiconductor created the first operational semiconductor IC. Texas Instruments, which held the patent for Kilby's invention, started a patent war, which was settled in 1966 by an agreement on cross-licensing.
There is no consensus on who invented the IC. The American press of the 1960s named four people: Kilby, Lehovec, Noyce and Hoerni; in the 1970s the list was shortened to Kilby and Noyce. Kilby was awarded the 2000 Nobel Prize in Physics "for his part in the invention of the integrated circuit". In the 2000s, historians Leslie Berlin, Bo Lojek and Arjun Saxena reinstated the idea of multiple IC inventors and revised the contribution of Kilby. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's hybrid IC.
== Prerequisites ==
=== Waiting for a breakthrough ===
During and immediately after World War II, a phenomenon named "the tyranny of numbers" was observed: some computational devices reached a level of complexity at which the losses from failures and downtime exceeded the expected benefits. Each Boeing B-29 (put into service in 1944) carried 300–1000 vacuum tubes and tens of thousands of passive components. The number of vacuum tubes reached thousands in advanced computers and more than 17,000 in the ENIAC (1946). Each additional component reduced the reliability of a device and lengthened the troubleshooting time. Traditional electronics had reached a deadlock, and further development of electronic devices required reducing the number of their components.
The invention of the first transistor in 1947 led to the expectation of a new technological revolution. Fiction writers and journalists heralded the imminent appearance of "intelligent machines" and robotization of all aspects of life. Although transistors did reduce the size and power consumption, they could not solve the problem of reliability of complex electronic devices. On the contrary, dense packing of components in small devices hindered their repair. While the reliability of discrete components was brought to the theoretical limit in the 1950s, there was no improvement in the connections between the components.
=== Idea of integration ===
Early developments of the integrated circuit go back to 1949, when the Siemens engineer Werner Jacobi filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a 3-stage amplifier arrangement, with two transistors working "upside-down" as an impedance converter. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. No immediate commercial use of his patent has been reported.
On May 7, 1952, the British radio engineer Geoffrey Dummer formulated the idea of integration in a public speech in Washington:
With the advent of the transistor and the work in semiconductors generally, it seems now to be possible to envisage electronic equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying and amplifying materials, the electrical functions being connected by cutting out areas of the various layers.
Dummer later became famous as "the prophet of integrated circuits", but not as their inventor. In 1956 he produced an IC prototype by growth from the melt, but his work was deemed impractical by the UK Ministry of Defence, because of the high cost and inferior parameters of the IC compared to discrete devices.
In May 1952, Sidney Darlington filed a patent application in the United States for a structure with two or three transistors integrated onto a single chip in various configurations; in October 1952, Bernard Oliver filed a patent application for a method of manufacturing three electrically connected planar transistors on one semiconductor crystal.
On May 21, 1953, Harwick Johnson filed a patent application for a method of forming various electronic components – transistors, resistors, lumped and distributed capacitances – on a single chip. Johnson described three ways of producing an integrated one-transistor oscillator. All of them used a narrow strip of a semiconductor with a bipolar transistor on one end and differed in the methods of producing the transistor. The strip acted as a series of resistors; the lumped capacitors were formed by fusion whereas inverse-biased p-n junctions acted as distributed capacitors. Johnson did not offer a technological procedure, and it is not known whether he produced an actual device. In 1959, a variant of his proposal was implemented and patented by Jack Kilby.
In 1957, Yasuo Tarui, at MITI's Electrotechnical Laboratory near Tokyo, fabricated a "quadrupole" transistor, a form of unipolar (field-effect transistor) and a bipolar junction transistor on the same chip. These early devices featured designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other.
=== Functional electronics ===
The leading US electronics companies (Bell Labs, IBM, RCA and General Electric) sought a solution to "the tyranny of numbers" in the development of discrete components that implemented a given function with a minimum number of attached passive elements. During the vacuum tube era, this approach made it possible to reduce the cost of a circuit at the expense of its operating frequency. For example, a memory cell of the 1940s consisted of two triodes and a dozen passive components and ran at frequencies up to 200 kHz. A megahertz response could be achieved with two pentodes and six diodes per cell. This cell could be replaced by one thyratron with a load resistor and an input capacitor, but the operating frequency of such a circuit did not exceed a few kHz.
In 1952, Jewell James Ebers from Bell Labs developed a prototype solid-state analog of thyratron – a four-layer transistor, or thyristor. William Shockley simplified its design to a two-terminal "four-layer diode" (Shockley diode) and attempted its industrial production. Shockley hoped that the new device would replace the polarized relay in telephone exchanges; however, the reliability of Shockley diodes was unacceptably low, and his company went into decline.
At the same time, work on thyristor circuits was carried out at Bell Labs, IBM and RCA. Ian Munro Ross and L. Arthur D'Asaro (Bell Labs) experimented with thyristor-based memory cells. Joe Logue and Rick Dill (IBM) were building counters using unijunction transistors. J. Torkel Wallmark and Harwick Johnson (RCA) used both thyristors and field-effect transistors. The works of 1955–1958 that used germanium thyristors were fruitless. Only in the summer of 1959, after the inventions of Kilby, Lehovec and Hoerni became publicly known, did D'Asaro report an operational shift register based on silicon thyristors. In this register, one crystal containing four thyristors replaced eight transistors, 26 diodes and 27 resistors. The area of each thyristor ranged from 0.2 to 0.4 mm2, with a thickness of about 0.1 mm. The circuit elements were isolated by etching deep grooves.
From the point of view of the supporters of functional electronics, their approach made it possible to circumvent the fundamental problems of the semiconductor technology of that era. The failures of Shockley, Ross and Wallmark proved the fallacy of this approach: the mass production of functional devices was hindered by technological barriers.
=== Silicon technology ===
Early transistors were made of germanium. By the mid-1950s, it was replaced by silicon, which could operate at higher temperatures. In 1954, Gordon Kidd Teal from Texas Instruments produced the first silicon transistor, which became commercial in 1955. Also in 1954, Fuller and Ditzenberger published a fundamental study of diffusion in silicon, and Shockley suggested using this technology to form p-n junctions with a given profile of the impurity concentration.
In early 1955, Carl Frosch from Bell Labs developed wet oxidation of silicon, and over the next two years Frosch, Moll, Fuller and Holonyak did further research on it. In 1957, Frosch and Derick published their work on the first manufactured silicon dioxide transistors: the first planar transistors, in which drain and source were adjacent at the same surface. This accidental discovery revealed the second fundamental advantage of silicon over germanium: contrary to germanium oxides, "wet" silica is a physically strong and chemically inert electrical insulator.
==== Surface passivation ====
Surface passivation, the process by which a semiconductor surface is rendered inert so that it does not change semiconductor properties as a result of interaction with air or other materials in contact with the surface or edge of the crystal, was first discovered by Carl Frosch and Lincoln Derick at Bell Labs between 1955 and 1957. Frosch and Derick showed that a silicon dioxide (SiO2) layer protected silicon wafers against the environment, masked against dopant diffusion into silicon, and provided electrical insulation. They demonstrated this by creating the first silicon dioxide transistors: the first transistors in which drain and source were adjacent at the surface, insulated by a SiO2 layer.
==== Planar process ====
At Bell Labs, the importance of Frosch's technique was immediately realized. Results of Frosch and Derick's work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni. Later, Hoerni attended a meeting where Atalla presented a paper about passivation based on the previous results at Bell Labs. Taking advantage of silicon dioxide's passivating effect on the silicon surface, Hoerni proposed to make transistors that were protected by a layer of silicon dioxide.
Jean Hoerni first proposed a planar technology of bipolar transistors. In this process, all the p-n junctions were covered by a protective layer, which should significantly improve reliability. However, at the time, this proposal was considered technically impossible. The formation of the emitter of an n-p-n transistor required diffusion of phosphorus, and the work of Frosch suggested that SiO2 does not block such diffusion. In March 1959, Chih-Tang Sah, a former colleague of Hoerni, pointed Hoerni and Noyce to an error in the conclusions of Frosch. Frosch used a thin oxide layer, whereas the experiments of 1957–1958 showed that a thick layer of oxide can stop the phosphorus diffusion.
Armed with the above knowledge, by March 12, 1959, Hoerni made the first prototype of a planar transistor, and on May 1, 1959, filed a patent application for the invention of the planar process. In April 1960, Fairchild launched the planar transistor 2N1613, and by October 1960 completely abandoned mesa transistor technology. By the mid-1960s, the planar process had become the main technology for producing transistors and monolithic integrated circuits.
== Three problems of microelectronics ==
The creation of the integrated circuit was hindered by three fundamental problems, which were formulated by Wallmark in 1958:
Integration. In 1958, there was no way of forming many different electronic components in one semiconductor crystal. Alloying was not suited to the IC and the latest mesa technology had serious problems with reliability.
Isolation. There was no technology to electrically isolate components on one semiconductor crystal.
Connection. There was no effective way to create electrical connections between the components of an IC, except for the extremely expensive and time-consuming connection using gold wires.
As it happened, three different companies held the key patents to each of these problems. Sprague Electric Company decided not to develop ICs, Texas Instruments limited itself to an incomplete set of technologies, and only Fairchild Semiconductor combined all the techniques required for commercial production of monolithic ICs.
=== Integration by Jack Kilby ===
==== Kilby's hybrid IC ====
In May 1958, Jack Kilby, an experienced radio engineer and a veteran of World War II, started working at Texas Instruments. At first, he had no specific tasks and had to find himself a suitable topic in the general direction of "miniaturization". He could either find a radically new research direction or blend into a multimillion-dollar project on the production of military circuits. In the summer of 1958, Kilby formulated three features of integration:
The only thing that a semiconductor company can successfully produce is semiconductors.
All circuit elements, including resistors and capacitors can be made of a semiconductor.
All circuit components can be formed on one semiconductor crystal, adding only the interconnections.
On August 28, 1958, Kilby assembled the first prototype of an IC using discrete components and received approval for implementing it on one chip. He had access to technologies that could form mesa transistors, mesa diodes and capacitors based on p-n junctions on a germanium (but not silicon) chip, and the bulk material of the chip could be used for resistors. The standard Texas Instruments chip for the production of 25 (5×5) mesa transistors was 10×10 mm in size. Kilby cut it into five-transistor 10×1.6 mm strips, but later used no more than two of them. On September 12, he presented the first IC prototype, which was a single-transistor oscillator with distributed RC feedback, repeating the idea and the circuit of the 1953 patent by Johnson. On September 19, he made the second prototype, a two-transistor trigger. He described these ICs, referencing Johnson's patent, in his U.S. patent 3,138,743.
Between February and May 1959 Kilby filed a series of applications: U.S. patent 3,072,832, U.S. patent 3,138,743, U.S. patent 3,138,744, U.S. patent 3,115,581 and U.S. patent 3,261,081. According to Arjun Saxena, the application date for the key patent 3,138,743 is uncertain: while the patent and the book by Kilby set it to February 6, 1959, it could not be confirmed by the application archives of the federal patent office. He suggested that the initial application was filed on February 6 and lost, and the (preserved) resubmission was received by the patent office on 6 May 1959 – the same date as the applications for the patents 3,072,832 and 3,138,744. Texas Instruments introduced the inventions by Kilby to the public on March 6, 1959.
None of these patents solved the problem of isolation and interconnection – the components were separated by cutting grooves on the chip and connected by gold wires. Thus these ICs were of the hybrid rather than monolithic type. However, Kilby demonstrated that various circuit elements: active components, resistors, capacitors and even small inductances can be formed on one chip.
==== Commercialization attempts ====
In autumn 1958, Texas Instruments introduced the yet non-patented idea of Kilby to military customers. While most divisions rejected it as incompatible with existing concepts, the US Air Force decided that this technology complied with their molecular electronics program, and ordered production of prototype ICs, which Kilby named "functional electronic blocks". Westinghouse added epitaxy to the Texas Instruments technology and received a separate order from the US military in January 1960.
In April 1960, Texas Instruments announced multivibrator #502 as the world's first integrated circuit available on the market. The company claimed that, unlike its competitors, it was actually selling its product, at a price of US$450 per unit or US$300 for quantities larger than 100 units. However, sales began only in the summer of 1961, and the price was higher than announced. The #502 schematic contained two transistors, four diodes, six resistors and two capacitors, and repeated traditional discrete circuitry. The device contained two silicon strips of 5 mm length inside a metal-ceramic housing. One strip contained the input capacitors; the other accommodated the mesa transistors and diodes, and its grooved body was used as six resistors. Gold wires acted as interconnections.
In October 1961, Texas Instruments built for the Air Force a demonstration "molecular computer" with a 300-bit memory. Kilby's colleague Harvey Cragon packed this computer into a volume of a little over 100 cm3, using 587 ICs to replace around 8,500 transistors and other components that would have been needed to perform the equivalent function. In December 1961, the Air Force accepted the first analog device created within the molecular electronics program – a radio receiver. It used costly ICs, each with fewer than 10–12 components, and suffered a high percentage of failed devices. This fostered the opinion that ICs could only justify themselves in aerospace applications. However, the aerospace industry rejected those ICs for the low radiation hardness of their mesa transistors.
=== Isolation by p-n junction ===
==== Solution by Kurt Lehovec ====
In late 1958, Kurt Lehovec, a scientist working at the Sprague Electric Company, attended a seminar at Princeton where Wallmark outlined his vision of the fundamental problems in microelectronics. On his way back to Massachusetts, Lehovec found a simple solution to the isolation problem which used the p-n junction:
It is well-known that a p-n junction has a high impedance to electric current, particularly if biased in the so-called blocking direction, or with no bias applied. Therefore, any desired degree of electrical insulation between two components assembled on the same slice can be achieved by having a sufficiently large number of p-n junctions in series between two semiconducting regions on which said components are assembled. For most circuits, one to three junctions will be sufficient...
Lehovec tested his idea using the technologies of making transistors that were available at Sprague. His device was a linear structure 2.2×0.5×0.1 mm in size, which was divided into isolated n-type cells (bases of the future transistors) by p-n junctions. Layers and junctions were formed by growth from the melt. The conductivity type was determined by the pulling speed of the crystal: an indium-rich p-type layer was formed at a slow speed, whereas an arsenic-rich n-type layer was produced at a high speed. The collectors and emitters of the transistors were created by welding indium beads. All electrical connections were made by hand, using gold wires.
The management of Sprague showed no interest in Lehovec's invention. Nevertheless, on April 22, 1959, he filed a patent application at his own expense, and then left the United States for two years. Because of this disengagement, Gordon Moore concluded that Lehovec should not be considered an inventor of the integrated circuit.
==== Solution by Robert Noyce ====
On January 14, 1959, Jean Hoerni introduced his latest version of the planar process to Robert Noyce and the patent attorney John Rallza at Fairchild Semiconductor. A memo of this event by Hoerni was the basis of a patent application for the invention of a planar process, filed in May 1959, and implemented in U.S. patent 3,025,589 (the planar process) and U.S. patent 3,064,167 (the planar transistor). On January 20, 1959, Fairchild managers met with Edward Keonjian, the developer of the onboard computer for the Atlas rocket, to discuss the joint development of hybrid digital ICs for his computer. These events probably led Robert Noyce to return to the idea of integration.
On January 23, 1959, Noyce documented his vision of the planar integrated circuit, essentially re-inventing the ideas of Kilby and Lehovec on the basis of Hoerni's planar process. Noyce claimed in 1976 that in January 1959 he had not known about Lehovec's work.
As an example, Noyce described an integrator that he discussed with Keonjian. Transistors, diodes and resistors of that hypothetical device were isolated from each other by p-n junctions, but in a different manner from the solution by Lehovec. Noyce considered the IC manufacturing process as follows. It should start with a chip of highly resistive intrinsic (undoped) silicon passivated with an oxide layer. The first photolithography step aims to open windows corresponding to the planned devices, and diffuse impurities to create low-resistance "wells" through the entire thickness of the chip. Then traditional planar devices are formed inside those wells. Contrary to the solution by Lehovec, this approach created two-dimensional structures and fit a potentially unlimited number of devices on a chip.
After formulating his idea, Noyce shelved it for several months due to pressing company matters, and returned to it only by March 1959. It took him six months to prepare a patent application, which was then rejected by the US Patent Office because they already received the application by Lehovec. Noyce revised his application and in 1964 received U.S. patent 3,150,299 and U.S. patent 3,117,260.
=== Invention of metallization ===
In early 1959, Noyce solved another important problem: the problem of interconnections that hindered mass production of ICs. According to his colleagues from the traitorous eight, the idea was self-evident: of course, the passivating oxide layer forms a natural barrier between the chip and the metallization layer. According to Turner Hasty, who worked with Kilby and Noyce, Noyce planned to make the microelectronic patents of Fairchild accessible to a wide range of companies, much as Bell Labs had released its transistor technologies in 1951–1952.
Noyce submitted his application on July 30, 1959, and on April 25, 1961, received U.S. patent 2,981,877. According to the patent, the invention consisted of preserving the oxide layer, which separated the metallization layer from the chip (except for the contact window areas), and of depositing the metal layer so that it is firmly attached to the oxide. The deposition method was not yet known, and the proposals by Noyce included vacuum deposition of aluminium through a mask and deposition of a continuous layer, followed by photolithography and etching off the excess metal. According to Saxena, the patent by Noyce, with all its drawbacks, accurately reflects the fundamentals of the modern IC technologies.
In his patent, Kilby also mentioned the use of a metallization layer. However, Kilby favored thick coating layers of different metals (aluminium, copper or antimony-doped gold) and silicon monoxide instead of the dioxide. These ideas were not adopted in the production of ICs.
== First monolithic integrated circuits ==
In August 1959, Noyce formed a group at Fairchild to develop integrated circuits. On May 26, 1960, this group, led by Jay Last, produced the first planar integrated circuit. This prototype was not monolithic – two pairs of its transistors were isolated by cutting a groove in the chip, following Last's patent. The initial production stages followed Hoerni's planar process. Then the 80-micron-thick crystal was glued, face down, to a glass substrate, and additional photolithography was carried out on the back surface. Deep etching created a groove down to the front surface. Then the back surface was covered with an epoxy resin, and the chip was separated from the glass substrate.
In August 1960, Last started working on the second prototype, using the isolation by p-n junction proposed by Noyce. Robert Norman developed a trigger circuit on four transistors and five resistors, whereas Isy Haas and Lionel Kattner developed the process of boron diffusion to form the insulating regions. The first operational device was tested on September 27, 1960 – this was the first planar and monolithic integrated circuit.
Fairchild Semiconductor did not realize the importance of this work. The vice president of marketing believed that Last was wasting company resources and that the project should be terminated. In January 1961, Last, Hoerni and their colleagues from the "traitorous eight", Kleiner and Roberts, left Fairchild and headed Amelco. David Allison, Lionel Kattner and some other technologists left Fairchild to establish a direct competitor, the company Signetics.
The first integrated circuit purchase order was for 64 logic elements at US$1,000 each; samples of the proposed packaging were delivered to MIT in 1960, and the 64 Texas Instruments integrated circuits in 1962.
Despite the departure of their leading scientists and engineers, in March 1961 Fairchild announced their first commercial IC series, named "Micrologic", and then spent a year on creating a family of logic ICs. By that time ICs were already produced by their competitors. Texas Instruments abandoned the IC designs by Kilby and received a contract for a series of planar ICs for space satellites, and then for the LGM-30 Minuteman ballistic missiles.
NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965.
Whereas the ICs for the onboard computers of the Apollo spacecraft were designed by Fairchild, most of them were produced by Raytheon and Philco Ford. Each of these computers contained about 5,000 standard logic ICs, and during their manufacture, the price for an IC dropped from US$1,000 to US$20–30. In this way, NASA and the Pentagon prepared the ground for the non-military IC market. The first monolithic integrated circuits, including all the logic ICs in the Apollo Guidance Computer, were 3-input resistor-transistor logic NOR gates.
The resistor-transistor logic of the first ICs by Fairchild and Texas Instruments was vulnerable to electromagnetic interference, and therefore in 1964 both companies replaced it with diode-transistor logic [91]. Signetics had released the diode-transistor family Utilogic back in 1962, but fell behind Fairchild and Texas Instruments in expanding production. Fairchild led in the number of ICs sold in 1961–1965, but Texas Instruments was ahead in revenue: 32% of the IC market in 1964 compared to Fairchild's 18%.
=== TTL integrated circuits ===
The above logic ICs were built from standard components, with sizes and configurations defined by the technological process, and all the diodes and transistors on one IC were of the same type. The use of different transistor types was first proposed by Tom Long at Sylvania during 1961–1962.
In 1961, transistor–transistor logic (TTL) was invented by James L. Buie. In late 1962, Sylvania launched the first family of TTL ICs, which became a commercial success. Bob Widlar of Fairchild made a similar breakthrough in 1964–1965 in analog ICs (operational amplifiers). TTL became the dominant IC technology from the 1970s to the early 1980s.
=== MOS integrated circuit ===
The MOSFET was invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered surface passivation by silicon dioxide and used their finding to create the first planar transistors, the first field effect transistors in which drain and source were adjacent at the same surface. The MOSFET made it possible to build high-density integrated circuits. Nearly all modern ICs are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–silicon field-effect transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962.
General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. The MOSFET has since become the most critical device component in modern ICs.
== Patent wars of 1962–1966 ==
In 1959–1961, when Texas Instruments and Westinghouse worked in parallel on aviation "molecular electronics", their competition had a friendly character. The situation changed in 1962, when Texas Instruments started to zealously pursue real and imaginary infringers of their patents and earned the nicknames "The Dallas legal firm" and "semiconductor cowboys". This example was followed by some other companies. Nevertheless, the IC industry continued to develop regardless of the patent disputes. In the early 1960s, the US Appeals Court ruled that Noyce was the inventor of the monolithic integrated circuit chip based on adherent oxide and junction isolation technologies.
Texas Instruments v. Westinghouse
In 1962–1963, when these companies had adopted the planar process, the Westinghouse engineer Hung-Chang Lin invented the lateral transistor. In the usual planar process, all transistors have the same conductivity type, typically n-p-n, whereas Lin's invention allowed the creation of n-p-n and p-n-p transistors on one chip. The military orders that Texas Instruments had anticipated went to Westinghouse. TI filed a case, which was settled out of court.
Texas Instruments v. Sprague
On April 10, 1962, Lehovec received a patent for isolation by p-n junction. Texas Instruments immediately filed a court case claiming that the isolation problem had been solved in their earlier patent filed by Kilby. Robert Sprague, the founder of Sprague, considered the case hopeless and was going to give up the patent rights, but was convinced otherwise by Lehovec. Four years later, Texas Instruments hosted an arbitration hearing in Dallas with demonstrations of Kilby's inventions and depositions by experts. However, Lehovec conclusively proved that Kilby had not mentioned isolation of components. His priority on the isolation patent was finally acknowledged in April 1966.
Raytheon v. Fairchild
On May 20, 1962, Jean Hoerni, who had already left Fairchild, received the first patent on the planar technology. Raytheon believed that Hoerni's patent repeated one held by Jules Andrews and Raytheon, and filed a court case. While similar in its photolithography, diffusion and etching processes, the approach of Andrews had a fundamental flaw: it involved the complete removal of the oxide layer after each diffusion. In Hoerni's process, by contrast, the "dirty" oxide was kept. Raytheon withdrew its claim and obtained a license from Fairchild.
Hughes v. Fairchild
Hughes Aircraft sued Fairchild, arguing that its researchers had developed Hoerni's process earlier. According to Fairchild's lawyers, this case was baseless, but it could take a few years, during which Fairchild could not sell licenses to Hoerni's process. Therefore, Fairchild chose to settle with Hughes out of court. Hughes acquired the rights to one of the seventeen points of Hoerni's patent, and then exchanged it for a small percentage of Fairchild's future licensing income.
Texas Instruments v. Fairchild
In their legal wars, Texas Instruments focused on their largest and most technologically advanced competitor, Fairchild Semiconductor. Their cases hindered not production at Fairchild, but the sale of licenses for its technologies. By 1965, Fairchild's planar technology had become the industry standard, but the license to the patents of Hoerni and Noyce had been purchased by fewer than ten manufacturers, and there were no mechanisms to pursue unlicensed production. Similarly, the key patents of Kilby were bringing no income to Texas Instruments. In 1964, the patent arbitration awarded Texas Instruments the rights to four of the five key provisions of the contested patents, but both companies appealed the decision. The litigation could have continued for years, if not for the defeat of Texas Instruments in the dispute with Sprague in April 1966. Texas Instruments realized that they could not claim priority for the whole set of key IC patents, and lost interest in the patent war. In the summer of 1966, Texas Instruments and Fairchild agreed on the mutual recognition of patents and cross-licensing of key patents; in 1967 they were joined by Sprague.
Japan v. Fairchild
In the early 1960s, both Fairchild and Texas Instruments tried to set up IC production in Japan, but were opposed by the Japanese Ministry of International Trade and Industry (MITI). In 1962, MITI banned Fairchild from further investment in the factory it had already purchased in Japan, and Noyce tried to enter the Japanese market through the corporation NEC. In 1963, the management of NEC pushed Fairchild into licensing terms extremely advantageous for Japan, strongly limiting Fairchild's sales in the Japanese market. Only after concluding the deal did Noyce learn that the president of NEC also chaired the MITI committee that blocked the Fairchild deals.
Japan v. Texas Instruments
In 1963, despite the negative experience with NEC and Sony, Texas Instruments tried to establish production in Japan. For two years MITI did not give a definite answer to the request, and in 1965 Texas Instruments retaliated by threatening an embargo on the import of electronic equipment that infringed its patents. This action hit Sony in 1966 and Sharp in 1967, prompting MITI to secretly look for a Japanese partner for Texas Instruments. MITI blocked the negotiations between Texas Instruments and Mitsubishi (the owner of Sharp), and persuaded Akio Morita to make a deal with Texas Instruments "for the future of Japanese industry". Despite the secret protocols that guaranteed the Americans a share in Sony, the agreement of 1967–1968 was extremely disadvantageous for Texas Instruments. For almost thirty years, Japanese companies produced ICs without paying royalties to Texas Instruments, and only in 1989 did a Japanese court acknowledge the patent rights to Kilby's invention. As a result, in the 1990s, all Japanese IC manufacturers had to pay for the 30-year-old patent or enter into cross-licensing agreements. In 1993, Texas Instruments earned US$520 million in license fees, mostly from Japanese companies.
== Historiography ==
=== Two inventors: Kilby and Noyce ===
During the patent wars of the 1960s the press and professional community in the United States recognized that the number of the IC inventors could be rather large. The book "Golden Age of Entrepreneurship" named four people: Kilby, Lehovec, Noyce and Hoerni. Sorab Ghandhi in "Theory and Practice of Microelectronics" (1968) wrote that the patents of Lehovec and Hoerni were the high point of semiconductor technology of the 1950s and opened the way for the mass production of ICs.
In October 1966, Kilby and Noyce were awarded the Ballantine Medal of the Franklin Institute "for their significant and essential contribution to the development of integrated circuits". This event initiated the idea of two inventors. The nomination of Kilby was criticized by contemporaries who did not recognize his prototypes as "real" semiconductor ICs. Even more controversial was the nomination of Noyce: the engineering community was well aware of the roles of Moore, Hoerni and other key inventors, whereas Noyce at the time of his invention was CEO of Fairchild and did not participate directly in the creation of the first IC. Noyce himself admitted, "I was trying to solve a production problem. I wasn't trying to make an integrated circuit".
According to Leslie Berlin, Noyce became the "father of the integrated circuit" because of the patent wars. Texas Instruments picked his name because it stood on the patent they challenged, and thereby "appointed" him the sole representative of all the development work at Fairchild. In turn, Fairchild mobilized all its resources to protect the company, and thus the priority of Noyce. While Kilby was personally involved in the public relations campaigns of Texas Instruments, Noyce kept away from publicity and was substituted by Gordon Moore.
By the mid-1970s, the two-inventor version became widely accepted, and the debates between Kilby and Lehovec in professional journals in 1976–1978 did not change the situation. Hoerni, Last and Lehovec were regarded as minor players; they did not represent large corporations and were not keen for public priority debates.
In scientific articles of the 1980s, the history of IC invention was often presented as follows
While at Fairchild, Noyce developed the integrated circuit. The same concept had been invented by Jack Kilby at Texas Instruments in Dallas a few months earlier. In July 1959, Noyce filed a patent for his conception of the integrated circuit. Texas Instruments filed a lawsuit for patent interference against Noyce and Fairchild, and the case dragged on for some years. Today, Noyce and Kilby are usually regarded as co-inventors of the integrated circuit, although Kilby was inducted into the Inventors Hall of Fame as the inventor. In any event, Noyce is credited with improving the integrated circuit for its many applications in the field of microelectronics.
In 1984, the two-inventor version was further supported by Thomas Reid in "The Chip: How Two Americans Invented the Microchip and Launched a Revolution". Robert Wright of The New York Times criticized Reid for lengthy descriptions of the supporting characters involved in the invention, yet the contributions of Lehovec and Last were not mentioned, and Jean Hoerni appears in the book only as a theorist who consulted Noyce.
Paul Ceruzzi in "A History of Modern Computing" (2003) also repeated the two-inventor story and stipulated that "Their invention, dubbed at first Micrologic, then the Integrated Circuit by Fairchild, was simply another step along this path" (of miniaturization demanded by the military programs of the 1950s). Referring to the opinion prevailing in the literature, he highlighted Noyce's decision to use the planar process of Hoerni, who paved the way for the mass production of ICs but was not included in the list of IC inventors. Ceruzzi did not cover the invention of the isolation of IC components.
In 2000, the Nobel Committee awarded the Nobel Prize in Physics to Kilby "for his part in the invention of the integrated circuit". Noyce died in 1990 and thus could not be nominated; when asked during his life about the prospects of the Nobel Prize he replied "They don't give Nobel Prizes for engineering or real work". Because of the confidentiality of the Nobel nomination procedure, it is not known whether other IC inventors had been considered. Saxena argued that the contribution of Kilby was pure engineering rather than basic science, and thus his nomination violated the will of Alfred Nobel.
The two-inventor version persisted through the 2010s. One variation puts Kilby in front and considers Noyce an engineer who improved Kilby's invention. Fred Kaplan in his popular book "1959: The Year Everything Changed" (2010) spends eight pages on the IC invention and assigns it to Kilby, mentioning Noyce only in a footnote and neglecting Hoerni and Last.
=== Later revisionism ===
In the late 1990s and 2000s, a series of books presented the IC invention beyond the simplified two-person story. In 1998, Michael Riordan and Lillian Hoddeson detailed the events leading to Kilby's invention in "Crystal Fire: The Birth of the Information Age", but went no further than that invention. In her 2005 biography of Robert Noyce, Leslie Berlin included the events unfolding at Fairchild and critically evaluated the contribution of Kilby. According to Berlin, the connecting wires "precluded the device from being manufactured in any quantity", which "Kilby was well aware" of.
In 2007, Bo Lojek opposed the two-inventor version; he described the contributions of Hoerni and Last, and criticized Kilby. In 2009, Saxena described the work of Lehovec and Hoerni. He also played down the role of Kilby and Noyce.
== See also ==
History of the integrated circuit
== Notes ==
== References ==
== Bibliography ==
Berlin, L. (2005). The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley. New York: Oxford University Press. ISBN 978-0-199-83977-3.
Brock, D. (2010). Lécuyer, C.; et al. (eds.). Makers of the Microchip: A Documentary History of Fairchild Semiconductor. MIT Press. ISBN 978-0-262-01424-3.
Ceruzzi, P. E. (2003). A History of Modern Computing. MIT Press. ISBN 978-0-262-53203-7.
Flamm, K (1996). Mismanaged Trade: Strategic Policy and the Semiconductor Industry. Brookings Institution Press. ISBN 978-0-815-72846-7.
Hubner, Kurt (1998). "The four-layer diode in the cradle of Silicon Valley". In Tsuya, H.; Huff, Howard R.; GöSele, U. (eds.). Silicon Materials Science and Technology: Proceedings of the Eighth International Symposium on Silicon Materials Science and Technology. The Electrochemical Society. pp. 99–115. ISBN 978-1-566-77193-1.
Kaplan, F. (2010). 1959: The Year Everything Changed. Wiley. ISBN 978-0-470-60203-4.
Kilby, J. (1976). "Invention of the Integrated Circuit" (PDF). IEEE Transactions on Electron Devices. 23 (7): 648–654. Bibcode:1976ITED...23..648K. doi:10.1109/t-ed.1976.18467. S2CID 19598101. Archived from the original (PDF) on 2016-03-04.
Kilby, Jack S. (December 8, 2000). "Turning Potential Into Realities: The Invention of the Integrated Circuit" (PDF). In Gösta Ekspong (ed.). Nobel Lectures, Physics 1996-2000. Singapore: World Scientific Publishing Co. (published 2002). pp. 474–485.
Lojek, Bo (2007). History of Semiconductor Engineering. Springer. ISBN 978-3-540-34257-1. Internet Archive eBook ISBN 978-3-540-34258-8.
Sah, Chih-Tang (October 1988). "Evolution of the MOS transistor-from conception to VLSI" (PDF). Proceedings of the IEEE. 76 (10): 1280–1326. Bibcode:1988IEEEP..76.1280S. doi:10.1109/5.16328. ISSN 0018-9219.
Saxena, A. (2009). Invention of integrated circuits: untold important facts. International series on advances in solid state electronics and technology. World Scientific. ISBN 978-9-812-81445-6. | Wikipedia/Invention_of_the_integrated_circuit |
A power management integrated circuit (PMIC) is an integrated circuit for power management. Although PMICs encompass a wide range of chip types, most include several DC/DC converters or their control parts. A PMIC is often included in battery-operated devices (such as mobile phones and portable media players) and embedded devices (such as routers) to decrease the amount of space required.
== Overview ==
The term PMIC refers to a class of integrated circuits that perform various functions related to power requirements.
A PMIC may have one or more of the following functions:
DC-to-DC conversion
Battery charging
Power-source selection
Voltage scaling
Power sequencing
Miscellaneous functions
Power management ICs are solid-state devices that control the flow and direction of electrical power. Many electrical devices use multiple internal voltages (e.g., 5 V, 3.3 V, 1.8 V) and sources of external power (e.g., wall outlet, battery), meaning that the power design of the device has multiple requirements for operation. A PMIC can refer to any chip performing an individual power-related function, but the term generally refers to ICs that incorporate more than one function, such as different power conversions and power controls like voltage supervision and undervoltage protection. By incorporating these functions into one IC, a number of improvements to the overall design can be made, such as better conversion efficiency, smaller solution size, and better heat dissipation.
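As a concrete illustration of the DC/DC conversion and efficiency points above, the following sketch shows the ideal buck-converter (step-down) relation Vout = D·Vin and a simple efficiency figure. All numbers are illustrative assumptions, not values from any particular PMIC datasheet.

```python
# Minimal sketch of two quantities involved in PMIC converter design.
# All values are illustrative assumptions, not from a real part.

def buck_duty_cycle(v_in, v_out):
    """Ideal continuous-conduction buck (step-down) relation: Vout = D * Vin."""
    return v_out / v_in

def conversion_efficiency(p_out, p_loss):
    """Efficiency = useful output power over total input power."""
    return p_out / (p_out + p_loss)

# Stepping an assumed 5 V input rail down to a 1.8 V logic rail:
d = buck_duty_cycle(5.0, 1.8)            # duty cycle
eta = conversion_efficiency(0.9, 0.1)    # 0.9 W delivered, 0.1 W lost
```

Real converter channels depart from this ideal relation through switching and conduction losses, which is why integrated multi-channel control can improve overall efficiency.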
== Features ==
A PMIC may include battery management, voltage regulation, and charging functions. It may include a DC to DC converter to allow dynamic voltage scaling. Some models are known to feature up to 95% power conversion efficiency. Some models integrate with dynamic frequency scaling in a combination known as DVFS (dynamic voltage and frequency scaling).
A PMIC may be manufactured using a BiCMOS process and may come in a QFN package. Some models feature an I²C or SPI serial bus communications interface for I/O.
Some models feature a low-dropout regulator (LDO), and a real-time clock (RTC) co-operating with a backup battery.
A PMIC can use pulse-frequency modulation (PFM) and pulse-width modulation (PWM), and can use a switching amplifier (Class-D electronic amplifier).
== IC manufacturers ==
Some of many manufacturers of PMICs:
== See also ==
Power cycle (power supplies)
Power electronics
Power management unit (PMU)
Power ramp
Quick charge
System basis chip (SBC)
System management controller (SMC)
== References == | Wikipedia/Power_management_integrated_circuit |
Miniaturizing components has always been a primary goal in the semiconductor industry because it cuts production costs and lets companies build smaller computers and other devices. Miniaturization, however, has increased the dissipated power per unit area, making it a key limiting factor in integrated circuit performance. Temperature increase becomes relevant for wires with relatively small cross-sections, where it may affect normal semiconductor behavior. Furthermore, since heat generation is proportional to the frequency of operation for switching circuits, fast computers generate more heat than slow ones, an undesired effect for chip manufacturers. This article summarizes the physical concepts that describe the generation and conduction of heat in an integrated circuit, and presents numerical methods that model heat transfer from a macroscopic point of view.
== Generation and transfer of heat ==
=== Fourier's law ===
At macroscopic level, Fourier's law states a relation between the transmitted heat per unit time per unit area and the gradient of temperature:
q = −κ∇T

where κ is the thermal conductivity, in [W·m−1·K−1].
=== Joule heating ===
Electronic systems work based on current and voltage signals. Current is the flow of charged particles through the material, and these particles (electrons or holes) interact with the crystal lattice, losing energy, which is released in the form of heat. Joule heating is the predominant mechanism for heat generation in integrated circuits and is an undesired effect in most cases. For an ohmic material, it has the form:
Q = j²ρ

where j is the current density in [A·m−2], ρ is the specific electric resistivity in [Ω·m], and Q is the generated heat per unit volume in [W·m−3].
=== Heat-transfer equation ===
The governing equation of the physics of the heat transfer problem relates the flux of heat in space, its variation in time and the generation of power by the following expression:
∇·(κ(T)∇T) + g = ρC ∂T/∂t

where κ is the thermal conductivity, ρ is the density of the medium, C is the specific heat, α = κ/(ρC) is the thermal diffusivity, and g is the rate of heat generation per unit volume. Heat diffuses from the source following the above equation, and the solution in a homogeneous medium follows a Gaussian distribution.
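The Gaussian character of the solution can be checked numerically. The sketch below takes the 1-D fundamental solution of the source-free (g = 0) heat equation, T(x,t) = exp(−x²/4αt)/√(4παt), and verifies by finite differences that it satisfies ∂T/∂t = α ∂²T/∂x²; α is an assumed illustrative value.

```python
import math

# Assumed thermal diffusivity, m^2/s (illustrative, not from the article).
alpha = 1e-5

def T(x, t):
    """1-D fundamental (Gaussian) solution of the source-free heat equation."""
    return math.exp(-x * x / (4 * alpha * t)) / math.sqrt(4 * math.pi * alpha * t)

# Central finite differences at a sample point (x0, t0):
x0, t0, h, k = 0.5e-3, 0.05, 1e-6, 1e-6
dT_dt   = (T(x0, t0 + k) - T(x0, t0 - k)) / (2 * k)
d2T_dx2 = (T(x0 - h, t0) - 2 * T(x0, t0) + T(x0 + h, t0)) / h**2
residual = dT_dt - alpha * d2T_dx2       # ~0 if the PDE is satisfied
```

The two terms are individually large but cancel almost exactly, confirming that the spreading Gaussian solves the diffusion equation.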
== Techniques to solve heat equation ==
=== Kirchhoff transformation ===
To remove the temperature dependence of κ, the Kirchhoff transformation can be performed:

θ = T_s + (1/κ_s) ∫_{T_s}^{T} κ(T) dT

where κ_s = κ(T_s) and T_s is the heat sink temperature. When applying this transformation, the heat equation becomes:

α∇²θ + (α/κ_s)g = ∂θ/∂t

where α = κ/(ρC) is called the diffusivity, which also depends on the temperature. To completely linearize the equation, a second transformation is employed:

α_s τ = ∫_0^t α(θ) dt

yielding the expression:

∇²θ − (1/α_s) ∂θ/∂τ = −g/κ_s

Simple, direct application of this equation requires an approximation: additional terms arising in the transformed Laplacian are dropped, leaving the Laplacian in its conventional form.
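A numerical sketch of the first (Kirchhoff) transformation follows. The conductivity model κ(T) = κ_s·T_s/T and the numbers are illustrative assumptions, not from the article; this model is convenient because the integral has the closed form κ_s·T_s·ln(T/T_s), so the result can be checked exactly.

```python
import math

def kirchhoff_theta(T, Ts, kappa, n=10000):
    """theta = Ts + (1/kappa(Ts)) * integral from Ts to T of kappa(T') dT'.

    The integral is evaluated with the composite trapezoidal rule.
    """
    ks = kappa(Ts)
    h = (T - Ts) / n
    s = 0.5 * (kappa(Ts) + kappa(T))
    for i in range(1, n):
        s += kappa(Ts + i * h)
    return Ts + h * s / ks

# Illustrative model: kappa falls off as 1/T from an assumed 150 W/(m K)
# at a 300 K heat sink; then theta(T) = Ts * (1 + ln(T/Ts)) exactly.
Ts = 300.0
kappa = lambda T: 150.0 * Ts / T
theta = kirchhoff_theta(400.0, Ts, kappa)
```

Note that θ exceeds T whenever κ falls with temperature: the transformed variable "stretches" the temperature scale so that the transformed equation looks like a constant-conductivity problem.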
=== Analytical solutions ===
Although analytical solutions can be found only for specific and simple cases, they give good insight into dealing with more complex situations. Analytical solutions for regular subsystems can also be combined to provide detailed descriptions of complex structures. In Batty's work, a Fourier series expansion of the temperature in the Laplace domain is introduced to find the solution of the linearized heat equation.
==== Example ====
This procedure can be applied to a simple but nontrivial case: a homogeneous cubic die made of GaAs, with side L = 300 μm. The goal is to find the temperature distribution on the top surface. The top surface is discretized into N smaller squares with index i = 1...N; one of them is considered to be the source.
Taking the Laplace transform of the heat equation:

∇²Θ̄ − (s/k_s)Θ̄ = 0

where Θ̄ = sθ − θ(τ = 0).
The function Θ̄ is expanded in terms of cosine functions for the x and y variables, and in terms of hyperbolic cosines and sines for the z variable. Next, by applying adiabatic boundary conditions at the lateral walls and a fixed temperature at the bottom (the heat sink temperature), the thermal impedance matrix equation is derived:
Δθ_i = Σ_{j=1}^{N} R_TH,ij(t) P_j(t)

where the index j accounts for the power sources, while the index i refers to each small area. For more details about the derivation, see Batty's paper.
The figure below shows the steady-state temperature distribution of this analytical method for a cubic die with side 300 μm. A constant power source of 0.3 W is applied over a central surface area of 0.1L × 0.1L. As expected, the distribution decays as it approaches the boundaries; its maximum is located at the center and almost reaches 400 K.
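The thermal impedance relation above is just a matrix-vector product. A minimal sketch with invented 2×2 impedance values (illustrative only, not from Batty's derivation) shows how the temperature rise of each surface cell superposes contributions from every power source:

```python
# Hedged sketch of delta_theta_i = sum_j R_ij * P_j: each cell's rise is a
# superposition of contributions from all sources. The impedance values
# are invented for illustration.

def temperature_rise(R, P):
    """Matrix-vector product: temperature rise per cell, in K."""
    return [sum(Rij * Pj for Rij, Pj in zip(row, P)) for row in R]

R = [[300.0, 40.0],     # K/W: self-heating dominates thermal coupling
     [40.0, 300.0]]
P = [0.3, 0.0]          # W: a single 0.3 W source, as in the example above
rise = temperature_rise(R, P)
```

With these assumed values, the cell under the source heats far more than its neighbor, mirroring the sharply peaked distribution described in the text.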
=== Numerical solutions ===
Numerical solutions use a mesh of the structure to perform the simulation. The most popular methods are: Finite difference time-domain (FDTD) method, Finite element method (FEM) and method of moments (MoM).
The finite-difference time-domain (FDTD) method is a robust and popular technique that consists in numerically solving the differential equations, together with the boundary conditions defined by the problem. This is done by discretizing space and time and using finite-difference formulas, so that the partial differential equations that describe the physics of the problem can be solved numerically by computer programs.
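A minimal 1-D sketch of this finite-difference approach (all material numbers are illustrative assumptions, not values from the article): an explicit FTCS update for ∂T/∂t = α ∂²T/∂x² + g/(ρC), with both ends held at the heat-sink temperature.

```python
# Hedged sketch: explicit FTCS finite differences for the 1-D heat
# equation with a source term and Dirichlet (heat sink) ends.
# All material numbers are illustrative assumptions.

def step_ftcs(T, alpha, dx, dt, q):
    """One explicit time step; q = g/(rho*C) is the source term in K/s."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + dt * (alpha * (T[i-1] - 2*T[i] + T[i+1]) / dx**2 + q[i])
    return Tn  # endpoints stay pinned at the sink temperature

n, L, alpha = 51, 300e-6, 2.5e-5        # 300 um domain, assumed diffusivity
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha                # below the FTCS stability limit of 0.5
T = [300.0] * n                         # start at the sink temperature
q = [0.0] * n
q[n // 2] = 1e6                         # point-like heat source (assumed)
for _ in range(5000):                   # march to (near) steady state
    T = step_ftcs(T, alpha, dx, dt, q)
```

The stability constraint α·dt/dx² ≤ 1/2 is the price of the explicit scheme: refining the mesh forces a quadratically smaller time step, one reason implicit or FEM formulations are often preferred for dense thermal meshes.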
The FEM is also a numerical scheme employed to solve engineering and mathematical problems described by differential equations with boundary conditions. It discretizes the space into smaller elements, for which basis functions are assigned to their nodes or edges. Basis functions are linear or higher-order polynomials. Applying the differential equation and the boundary conditions of the problem to the basis functions, a system of equations is formulated using either the Ritz or the Galerkin method. Finally, a direct or iterative method is employed to solve the resulting system of linear equations. For the thermal case, the FEM is more suitable due to the nonlinear nature of the thermal properties.
==== Example ====
The previous example can be solved with a numerical method. For this case, the cube can be discretized into rectangular elements, and its basis functions can be chosen to be a first-order (linear) approximation:
N_i^e = (1/2)ξ(1 ∓ ζ),  i = 1, 4
N_i^e = (1/2)η(1 ∓ ζ),  i = 2, 5
N_i^e = (1/2)(1 − ξ − η)(1 ∓ ζ),  i = 3, 6

where ζ = 2(z − z_c)/h_z; if z_c = 0, then ζ = 2z/h_z.
Using these basis functions and applying Galerkin's method to the heat transfer equation, a matrix equation is obtained:
[S]{θ} + [R] d{θ}/dt = {B}

where

R_ij = ∫_V N_j N_i dV
S_ij = k ∫_V ∇N_j · ∇N_i dV
B_i = (k/κ_s) ∫_{Ω_1} N_i p(x,y) dΩ + (k/κ_s) ∫_V N_i g dV − k T_o Σ_{j=0}^{N_D} ∫_V ∇N_j^D · ∇N_i dV
These expressions can be evaluated using a simple FEM code. The figure below shows the temperature distribution for the numerical solution. This solution shows very good agreement with the analytical case; its peak also reaches about 390 K at the center. The apparent lack of smoothness of the distribution comes from the first-order approximation of the basis functions, and it can be remedied by using higher-order basis functions. Better results might also be obtained by employing a denser mesh of the structure; however, for very dense meshes the computation time increases considerably, making the simulation impractical.
The next figure shows a comparison of the peak temperature as a function of time for both methods. The system reaches steady state in approximately 1 ms.
=== Model order reduction ===
Numerical methods such as FEM or FDM derive a matrix equation, as shown in the previous section. To solve this equation faster, a method called model order reduction (MOR) can be employed to find a lower-order approximation. This method is based on the fact that a high-dimensional state vector belongs to a low-dimensional subspace [1]. The figure below shows the concept of the MOR approximation: by finding a matrix V, the dimension of the system can be reduced so that a simplified system is solved instead.
Therefore, the original system of equations:
{\displaystyle C\left\{x\right\}'+K\left\{x\right\}=F\left\{u\right\}}
becomes:
{\displaystyle V^{T}CV\left\{z\right\}'+V^{T}KV\left\{z\right\}=V^{T}F\left\{u\right\}}
The order of the reduced system is much lower than that of the original, making the computation much less expensive. Once the reduced solution {z} is obtained, the original vector is recovered by taking the product with V.
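A minimal sketch of this projection, assuming a snapshot-based (POD-style) choice of V; the full-order matrices below are illustrative toys, and other reduction bases would work equally well:

```python
import numpy as np

# Projection-based model order reduction for C x' + K x = F u.
# The columns of V span the low-dimensional subspace; here V comes from an
# SVD of a few full-order snapshot solutions (a POD-style choice).
rng = np.random.default_rng(0)
n, r = 50, 5
K = np.diag(np.linspace(1.0, 10.0, n))   # toy full-order matrices
C = np.eye(n)
F = rng.standard_normal((n, 1))

snapshots = np.column_stack([np.linalg.solve(K + s * C, F).ravel()
                             for s in [0.0, 1.0, 5.0, 10.0, 20.0]])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :r]

# Reduced system: (V^T C V) z' + (V^T K V) z = (V^T F) u
Cr, Kr, Fr = V.T @ C @ V, V.T @ K @ V, V.T @ F

# Compare steady-state responses (x' = 0, u = 1): K x = F  vs  Kr z = Fr
x_full = np.linalg.solve(K, F)
x_mor = V @ np.linalg.solve(Kr, Fr)      # recover the full vector as x ~ V z
err = np.linalg.norm(x_full - x_mor) / np.linalg.norm(x_full)
```

Here the reduced system is 5×5 instead of 50×50; since the steady-state solution happens to lie in the snapshot subspace, the projection reproduces it essentially exactly.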
== Conclusion ==
The generation of heat is produced mainly by Joule heating, an undesired effect that has limited the performance of integrated circuits. In the present article, heat conduction was described, and analytical and numerical methods for solving a heat transfer problem were presented. Using these methods, the steady-state temperature distribution was computed, as well as the peak temperature as a function of time, for a cubic die. For an input power of 0.3 W (or 3.333×10⁸ W/m²) applied over a single surface source on top of a cubic die, a peak temperature increase on the order of 100 K was computed. Such an increase in temperature can affect the behavior of surrounding semiconductor devices: important parameters like carrier mobility change drastically. This is why heat dissipation is a relevant issue that must be considered in circuit design.
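The two quoted figures can be cross-checked with a line of arithmetic, assuming the flux is applied over a square surface source:

```python
# Consistency check of the quoted figures: 0.3 W spread over an area A gives
# a flux of 3.333e8 W/m^2, which implies a square source about 30 um on a side.
P = 0.3            # input power, W
q = 3.333e8        # quoted heat flux, W/m^2

A = P / q          # source area, m^2
side = A ** 0.5    # side of an (assumed) square source, m
print(A)           # ~9.0e-10 m^2
print(side * 1e6)  # ~30 um
```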
== See also ==
Heat generation in integrated circuits
== References == | Wikipedia/Thermal_simulations_for_integrated_circuits |
Integrated circuit packaging is the final stage of semiconductor device fabrication, in which the die is encapsulated in a supporting case that prevents physical damage and corrosion. The case, known as a "package", supports the electrical contacts which connect the device to a circuit board.
The packaging stage is followed by testing of the integrated circuit.
== Design considerations ==
=== Electrical ===
The current-carrying traces that run out of the die, through the package, and into the printed circuit board (PCB) have very different electrical properties compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself. Therefore, it is important that the materials used as electrical contacts exhibit characteristics like low resistance, low capacitance and low inductance. Both the structure and materials must prioritize signal transmission properties, while minimizing any parasitic elements that could negatively affect the signal.
Controlling these characteristics is becoming increasingly important as the rest of technology begins to speed up. Packaging delays have the potential to make up almost half of a high-performance computer's delay, and this bottleneck on speed is expected to increase.
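As a rough illustration of the scale involved, a package lead can be modeled as a lumped R, L, C network whose time constants are comparable to on-chip gate delays. The values below are generic assumptions for a hypothetical lead, not data for any particular package:

```python
# Rough lumped-element illustration of why package parasitics matter.
# R, L and C here are assumed, order-of-magnitude values for one lead.
R = 0.5      # lead resistance, ohms
L = 5e-9     # lead inductance, H
C = 2e-12    # lead capacitance, F

t_lc = (L * C) ** 0.5   # LC (propagation-style) time constant, s
t_rc = R * C            # RC time constant, s
print(t_lc * 1e12)      # ~100 ps, comparable to on-chip gate delays
```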
=== Mechanical and thermal ===
The integrated circuit package must resist physical breakage, keep out moisture, and also provide effective heat dissipation from the chip. Moreover, for RF applications, the package is commonly required to shield electromagnetic interference, that may either degrade the circuit performance or adversely affect neighboring circuits. Finally, the package must permit interconnecting the chip to a PCB. The materials of the package are either plastic (thermoset or thermoplastic), metal (commonly Kovar) or ceramic. A common plastic used for this is epoxy-cresol-novolak (ECN). All three material types offer usable mechanical strength, moisture and heat resistance. Nevertheless, for higher-end devices, metallic and ceramic packages are commonly preferred due to their higher strength (which also supports higher pin-count designs), heat dissipation, hermetic performance, or other reasons. Generally, ceramic packages are more expensive than similar plastic packages.
Some packages have metallic fins to enhance heat transfer, but these take up space. Larger packages also allow for more interconnecting pins.
=== Economic ===
Cost is a factor in selection of integrated circuit packaging. Typically, an inexpensive plastic package can dissipate heat up to 2W, which is sufficient for many simple applications, though a similar ceramic package can dissipate up to 50W in the same scenario. As the chips inside the package get smaller and faster, they also tend to get hotter. As the subsequent need for more effective heat dissipation increases, the cost of packaging rises along with it. Generally, the smaller and more complex the package needs to be, the more expensive it is to manufacture. Wire bonding can be used instead of techniques such as flip-chip to reduce costs.
== History ==
Early integrated circuits were packaged in ceramic flat packs, which the military used for many years for their reliability and small size. The other type of packaging used in the 1970s, called the ICP (Integrated Circuit Package), was a ceramic package (sometimes round as the transistor package), with the leads on one side, co-axially with the package axis.
Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s VLSI pin counts exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit, a carrier which occupies an area about 30–50% less than an equivalent DIP, with a typical thickness that is 70% less. The next big innovation was the area array package, which places the interconnection terminals throughout the surface area of the package, providing a greater number of connections than previous package types where only the outer perimeter is used. The first area array package was a ceramic pin grid array package. Not long after, the plastic ball grid array (BGA), another type of area array package, became one of the most commonly used packaging techniques.
In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline packages (TSOP) replaced PGA packages as the most common for high pin count devices, though PGA packages are still often used for microprocessors. However, industry leaders Intel and AMD transitioned in the 2000s from PGA packages to land grid array (LGA) packages.
Ball grid array (BGA) packages have existed since the 1970s, but evolved into flip-chip ball grid array (FCBGA) packages in the 1990s. FCBGA packages allow for much higher pin count than any existing package types. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. Ceramic substrates for BGA were replaced with organic substrates to reduce costs and use existing PCB manufacturing techniques to produce more packages at a time by using larger PCB panels during manufacturing.
Recent developments include stacking multiple dies in a single package, called a system in package (SiP), or a three-dimensional integrated circuit. Combining multiple dies on a small substrate, often ceramic, is called a multi-chip module (MCM). The boundary between a large MCM and a small printed circuit board is sometimes blurry.
== Common package types ==
Through-hole technology
Surface-mount technology
Chip carrier
Pin grid array
Flat package
Small Outline Integrated Circuit
Chip-scale package
Ball grid array
Transistor, diode, small pin count IC packages
Multi-chip packages
== Operations ==
For traditional ICs, after wafer dicing, the die is picked from the diced wafer using a vacuum tip or suction cup and undergoes die attachment which is the step during which a die is mounted and fixed to the package or support structure (header). In high-powered applications, the die is usually eutectic bonded onto the package, using e.g. gold-tin or gold-silicon solder (for good heat conduction). For low-cost, low-powered applications, the die is often glued directly onto a substrate (such as a printed wiring board) using an epoxy adhesive.
Alternatively, dies can be attached using solder. These techniques are usually used when the die will be wire bonded; dies using flip-chip technology do not use these attachment techniques.
IC bonding is also known as die bonding, die attach, and die mount.
The following operations are performed at the packaging stage, as broken down into bonding, encapsulation, and wafer bonding steps. Note that this list is not all-inclusive and not all of these operations are performed for every package, as the process is highly dependent on the package type.
IC bonding
Wire bonding
Thermosonic bonding
Down bonding
Tape automated bonding
Flip chip
Quilt packaging
Film attaching
Spacer attaching
Sintering die attach
IC encapsulation
Baking
Plating
Lasermarking
Trim and form
Wafer bonding
Sintering die attach is a process that involves placing the semiconductor die onto the substrate and then subjecting it to high temperature and pressure in a controlled environment.
== See also ==
Advanced packaging (semiconductors)
List of electronic component packaging types
List of electronics package dimensions
Gold–aluminium intermetallic "purple plague"
Co-fired ceramic
B-staging
Potting (electronics)
Quilt packaging
Electronic packaging
Decapping
== References == | Wikipedia/Integrated_circuit_packaging |
The MOSFET (metal–oxide–semiconductor field-effect transistor) is a type of insulated-gate field-effect transistor (IGFET) that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the covered gate determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals.
The MOSFET is the basic building block of most modern electronics, and the most frequently manufactured device in history, with an estimated total of 13 sextillion (1.3 × 10²²) MOSFETs manufactured between 1960 and 2018. It is the most common semiconductor device in digital and analog circuits, and the most common power device. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. MOSFET scaling and miniaturization have been driving the rapid exponential growth of electronic semiconductor technology since the 1960s, and have enabled high-density integrated circuits (ICs) such as memory chips and microprocessors.
MOSFETs in integrated circuits are the primary elements of computer processors, semiconductor memory, image sensors, and most other types of integrated circuits. Discrete MOSFET devices are widely used in applications such as switch mode power supplies, variable-frequency drives, and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement, and home and automobile sound systems.
== Integrated circuits ==
The MOSFET, invented by a Bell Labs team under Mohamed Atalla and Dawon Kahng between 1959 and 1960, is the most widely used type of transistor and the most critical device component in integrated circuit (IC) chips. The planar process, developed by Jean Hoerni at Fairchild Semiconductor in early 1959, was also critical to the invention of the monolithic integrated circuit chip by Robert Noyce later in 1959. This was followed by the development of clean rooms to reduce contamination to levels never before thought necessary, and coincided with the development of photolithography which, along with surface passivation and the planar process, allowed circuits to be made in a few steps.
Atalla realised that the main advantage of a MOS transistor was its ease of fabrication, particularly suiting it for use in the recently invented integrated circuits. In contrast to bipolar transistors which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps but could be easily isolated from each other. Its advantage for integrated circuits was re-iterated by Dawon Kahng in 1961. The Si–SiO2 system possessed the technical attractions of low cost of production (on a per circuit basis) and ease of integration. These two factors, along with its rapidly scaling miniaturization and low energy consumption, led to the MOSFET becoming the most widely used type of transistor in IC chips.
The earliest experimental MOS IC to be demonstrated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuits in 1964, consisting of 120 p-channel transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. In 1967, Bell Labs researchers Robert Kerwin, Donald Klein and John Sarace developed the self-aligned gate (silicon-gate) MOS transistor, which Fairchild Semiconductor researchers Federico Faggin and Tom Klein used to develop the first silicon-gate MOS IC.
=== Chips ===
There are various types of MOS IC chips, which include the following.
=== Large-scale integration ===
With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density IC chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of MOSFETs on a chip by the late 1960s. MOS technology enabled the integration of more than 10,000 transistors on a single LSI chip by the early 1970s, before later enabling very large-scale integration (VLSI).
=== Microprocessors ===
The MOSFET is the basis of every microprocessor, and was responsible for the invention of the microprocessor. The origins of both the microprocessor and the microcontroller can be traced back to the invention and development of MOS technology. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.
The earliest microprocessors were all MOS chips, built with MOS LSI circuits. The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first commercial single-chip microprocessor, the Intel 4004, was developed by Federico Faggin, using his silicon-gate MOS IC technology, with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. With the arrival of CMOS microprocessors in 1975, the term "MOS microprocessors" began to refer to chips fabricated entirely from PMOS logic or fabricated entirely from NMOS logic, contrasted with "CMOS microprocessors" and "bipolar bit-slice processors".
== CMOS circuits ==
Complementary metal–oxide–semiconductor (CMOS) logic was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. CMOS had lower power consumption, but was initially slower than NMOS, which was more widely used for computers in the 1970s. In 1978, Hitachi introduced the twin-well CMOS process, which allowed CMOS to match the performance of NMOS with less power consumption. The twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computers in the 1980s. By the 1980s CMOS logic consumed over 7 times less power than NMOS logic, and about 100,000 times less power than bipolar transistor-transistor logic (TTL).
=== Digital ===
The growth of digital technologies like the microprocessor has provided the motivation to advance MOSFET technology faster than any other type of silicon-based transistor. A big advantage of MOSFETs for digital switching is that the oxide layer between the gate and the channel prevents DC current from flowing through the gate, further reducing power consumption and giving a very large input impedance. The insulating oxide between the gate and channel effectively isolates a MOSFET in one logic stage from earlier and later stages, which allows a single MOSFET output to drive a considerable number of MOSFET inputs. Bipolar transistor-based logic (such as TTL) does not have such a high fanout capacity. This isolation also makes it easier for designers to ignore, to some extent, loading effects between logic stages. That extent is defined by the operating frequency: as frequencies increase, the input impedance of the MOSFETs decreases.
=== Analog ===
The MOSFET's advantages in digital circuits do not translate into supremacy in all analog circuits. The two types of circuit draw upon different features of transistor behavior. Digital circuits switch, spending most of their time either fully on or fully off. The transition from one to the other is only of concern with regards to speed and charge required. Analog circuits depend on operation in the transition region where small changes to Vgs can modulate the output (drain) current. The JFET and bipolar junction transistor (BJT) are preferred for accurate matching (of adjacent devices in integrated circuits), higher transconductance and certain temperature characteristics which simplify keeping performance predictable as circuit temperature varies.
Nevertheless, MOSFETs are widely used in many types of analog circuits because of their own advantages (zero gate current, high and adjustable output impedance and improved robustness vs. BJTs which can be permanently degraded by even lightly breaking down the emitter-base). The characteristics and performance of many analog circuits can be scaled up or down by changing the sizes (length and width) of the MOSFETs used. By comparison, in bipolar transistors the size of the device does not significantly affect its performance. MOSFETs' ideal characteristics regarding gate current (zero) and drain-source offset voltage (zero) also make them nearly ideal switch elements, and also make switched capacitor analog circuits practical. In their linear region, MOSFETs can be used as precision resistors, which can have a much higher controlled resistance than BJTs. In high power circuits, MOSFETs sometimes have the advantage of not suffering from thermal runaway as BJTs do. Also, MOSFETs can be configured to perform as capacitors and gyrator circuits which allow op-amps made from them to appear as inductors, thereby allowing all of the normal analog devices on a chip (except for diodes, which can be made smaller than a MOSFET anyway) to be built entirely out of MOSFETs. This means that complete analog circuits can be made on a silicon chip in a much smaller space and with simpler fabrication techniques. MOSFETS are ideally suited to switch inductive loads because of tolerance to inductive kickback.
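The gate-controlled resistor behavior mentioned above can be sketched with the standard triode-region approximation R_on ≈ 1/(μₙC_ox·(W/L)·(V_GS − V_th)); the device parameters used below are illustrative assumptions, not values for any real process:

```python
# Sketch: a MOSFET in its linear (triode) region behaves as a gate-controlled
# resistor, valid for small drain-source voltages.
def r_on(mu_cox, w_over_l, vgs, vth):
    """Small-V_DS channel resistance of an n-MOSFET in the triode region:
    R_on = 1 / (mu_n * C_ox * (W/L) * (V_GS - V_th))."""
    vov = vgs - vth                 # overdrive voltage
    if vov <= 0:
        raise ValueError("device is off: V_GS must exceed V_th")
    return 1.0 / (mu_cox * w_over_l * vov)

# Assumed parameters: mu_n*C_ox = 200 uA/V^2, W/L = 10, V_GS = 1.8 V, V_th = 0.5 V
r = r_on(200e-6, 10.0, 1.8, 0.5)
print(r)   # ~385 ohms; raising V_GS lowers the resistance
```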
Some ICs combine analog and digital MOSFET circuitry on a single mixed-signal integrated circuit, making the needed board space even smaller. This creates a need to isolate the analog circuits from the digital circuits on a chip level, leading to the use of isolation rings and silicon on insulator (SOI). Since MOSFETs require more space to handle a given amount of power than a BJT, fabrication processes can incorporate BJTs and MOSFETs into a single device. Mixed-transistor devices are called bi-FETs (bipolar FETs) if they contain just one BJT-FET and BiCMOS (bipolar-CMOS) if they contain complementary BJT-FETs. Such devices have the advantages of both insulated gates and higher current density.
=== RF CMOS ===
In the late 1980s, Asad Abidi pioneered RF CMOS technology, which uses MOS VLSI circuits, while working at UCLA. This changed the way in which RF circuits were designed, away from discrete bipolar transistors and towards CMOS integrated circuits. As of 2008, the radio transceivers in all wireless networking devices and modern mobile phones are mass-produced as RF CMOS devices. RF CMOS is also used in nearly all modern Bluetooth and wireless LAN (WLAN) devices.
== Analog switches ==
MOSFET analog switches use the MOSFET to pass analog signals when on, and as a high impedance when off. Signals flow in both directions across a MOSFET switch. In this application, the drain and source of a MOSFET exchange places depending on the relative voltages of the source/drain electrodes. The source is the more negative side for an N-MOS or the more positive side for a P-MOS. All of these switches are limited on what signals they can pass or stop by their gate–source, gate–drain, and source–drain voltages; exceeding the voltage, current, or power limits will potentially damage the switch.
=== Single-type ===
This analog switch uses a simple four-terminal MOSFET of either P or N type.
In the case of an n-type switch, the body is connected to the most negative supply (usually GND) and the gate is used as the switch control. Whenever the gate voltage exceeds the source voltage by at least a threshold voltage, the MOSFET conducts. The higher the voltage, the more the MOSFET can conduct. An N-MOS switch passes all voltages less than Vgate − Vtn. When the switch is conducting, it typically operates in the linear (or ohmic) mode of operation, since the source and drain voltages will typically be nearly equal.
In the case of a P-MOS, the body is connected to the most positive voltage, and the gate is brought to a lower potential to turn the switch on. The P-MOS switch passes all voltages higher than Vgate − Vtp (threshold voltage Vtp is negative in the case of enhancement-mode P-MOS).
=== Dual-type (CMOS) ===
This "complementary" or CMOS type of switch uses one P-MOS and one N-MOS FET to counteract the limitations of the single-type switch. The FETs have their drains and sources connected in parallel, the body of the P-MOS is connected to the high potential (VDD) and the body of the N-MOS is connected to the low potential (gnd). To turn the switch on, the gate of the P-MOS is driven to the low potential and the gate of the N-MOS is driven to the high potential. For voltages between VDD − Vtn and gnd − Vtp, both FETs conduct the signal; for voltages less than gnd − Vtp, the N-MOS conducts alone; and for voltages greater than VDD − Vtn, the P-MOS conducts alone.
The voltage limits for this switch are the gate–source, gate–drain and source–drain voltage limits for both FETs. Also, the P-MOS is typically two to three times wider than the N-MOS, so the switch will be balanced for speed in the two directions.
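The conduction ranges described above can be sketched as a small helper that reports which devices of a transmission gate pass a given input voltage; the supply and threshold voltages are illustrative assumptions:

```python
# Which devices in a CMOS transmission gate conduct a given input voltage.
# Supply and enhancement-mode thresholds below are assumed example values.
VDD, GND = 3.3, 0.0
VTN, VTP = 0.7, -0.7

def conducting_devices(vin):
    """Switch on: NMOS gate at VDD, PMOS gate at GND."""
    devices = []
    if vin < VDD - VTN:     # NMOS passes voltages below Vgate - Vtn
        devices.append("nmos")
    if vin > GND - VTP:     # PMOS passes voltages above Vgate - Vtp
        devices.append("pmos")
    return devices

print(conducting_devices(0.2))   # ['nmos'] -- too low for the PMOS
print(conducting_devices(1.6))   # ['nmos', 'pmos'] -- both conduct mid-rail
print(conducting_devices(3.1))   # ['pmos'] -- too high for the NMOS
```

Together the two devices cover the whole rail-to-rail range, which is exactly the limitation of the single-type switch that the complementary pair removes.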
Tri-state circuitry sometimes incorporates a CMOS MOSFET switch on its output to provide for a low-ohmic, full-range output when on, and a high-ohmic, mid-level signal when off.
== MOS memory ==
The advent of the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores in computer memory. The first modern computer memory was introduced in 1965, when John Schmidt at Fairchild Semiconductor designed the first MOS semiconductor memory, a 64-bit MOS SRAM (static random-access memory). SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data.
MOS technology is the basis for DRAM (dynamic random-access memory). In 1966, Dr. Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM (dynamic random-access memory) memory cell, based on MOS technology. MOS memory enabled higher performance, was cheaper, and consumed less power, than magnetic-core memory, leading to MOS memory overtaking magnetic core memory as the dominant computer memory technology by the early 1970s.
Frank Wanlass, while studying MOSFET structures in 1963, noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM (erasable programmable read-only memory) technology. In 1967, Dawon Kahng and Simon Sze proposed that floating-gate memory cells, consisting of floating-gate MOSFETs (FGMOS), could be used to produce reprogrammable ROM (read-only memory). Floating-gate memory cells later became the basis for non-volatile memory (NVM) technologies including EPROM, EEPROM (electrically erasable programmable ROM) and flash memory.
=== Types of MOS memory ===
There are various types of MOS memory, which include the following.
== MOS sensors ==
A number of MOSFET sensors have been developed, for measuring physical, chemical, biological and environmental parameters. The earliest MOSFET sensors include the open-gate FET (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode.
By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
The two main types of image sensors used in digital imaging technology are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on MOS technology, with the CCD based on MOS capacitors and the CMOS sensor based on MOS transistors.
=== Image sensors ===
MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.
The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team at NASA's Jet Propulsion Laboratory in the early 1990s.
MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 μm NMOS sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.
=== Other sensors ===
MOS sensors, also known as MOSFET sensors, are widely used to measure physical, chemical, biological and environmental parameters. The ion-sensitive field-effect transistor (ISFET), for example, is widely used in biomedical applications.
MOSFETs are also widely used in microelectromechanical systems (MEMS), as silicon MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965.
Common applications of other MOS sensors include the following.
== Power MOSFET ==
The power MOSFET, which is commonly used in power electronics, was developed in the early 1970s. The power MOSFET enables low gate drive power, fast switching speed, and advanced paralleling capability.
The power MOSFET is the most widely used power device in the world. Advantages over bipolar junction transistors in power electronics include MOSFETs not requiring a continuous flow of drive current to remain in the ON state, offering higher switching speeds, lower switching power losses, lower on-resistances, and reduced susceptibility to thermal runaway. The power MOSFET had an impact on power supplies, enabling higher operating frequencies, size and weight reduction, and increased volume production.
Switching power supplies are the most common applications for power MOSFETs. They are also widely used for MOS RF power amplifiers, which enabled the transition of mobile networks from analog to digital in the 1990s. This led to the wide proliferation of wireless mobile networks, which revolutionised telecommunications systems. The LDMOS in particular is the most widely used power amplifier in mobile networks such as 2G, 3G, 4G and 5G, as well as broadcasting and amateur radio. Over 50 billion discrete power MOSFETs are shipped annually, as of 2018. They are widely used for automotive, industrial and communications systems in particular. Power MOSFETs are commonly used in automotive electronics, particularly as switching devices in electronic control units, and as power converters in modern electric vehicles. The insulated-gate bipolar transistor (IGBT), a hybrid MOS-bipolar transistor, is also used for a wide variety of applications.
LDMOS, a power MOSFET with lateral structure, is commonly used in high-end audio amplifiers and high-power PA systems. Their advantage is a better behaviour in the saturated region (corresponding to the linear region of a bipolar transistor) than the vertical MOSFETs. Vertical MOSFETs are designed for switching applications.
=== DMOS and VMOS ===
Power MOSFETs, including DMOS, LDMOS and VMOS devices, are commonly used for a wide range of other applications, which include the following.
=== RF DMOS ===
RF DMOS, also known as RF power MOSFET, is a type of DMOS power transistor designed for radio-frequency (RF) applications. It is used in various radio and RF applications, which include the following.
== Consumer electronics ==
MOSFETs are fundamental to the consumer electronics industry. According to Colinge, numerous consumer electronics would not exist without the MOSFET, such as digital wristwatches, pocket calculators, and video games.
MOSFETs are commonly used for a wide range of consumer electronics, which include the following devices listed. Computers or telecommunication devices (such as phones) are not included here, but are listed separately in the Information and communications technology (ICT) section below.
=== Pocket calculators ===
One of the earliest influential consumer electronic products enabled by MOS LSI circuits was the electronic pocket calculator, as MOS LSI technology enabled large amounts of computational capability in small packages. In 1965, the Victor 3900 desktop calculator was the first MOS LSI calculator, with 29 MOS LSI chips. In 1967 the Texas Instruments Cal-Tech was the first prototype electronic handheld calculator, with three MOS LSI chips, and it was later released as the Canon Pocketronic in 1970. The Sharp QT-8D desktop calculator was the first mass-produced LSI MOS calculator in 1969, and the Sharp EL-8 which used four MOS LSI chips was the first commercial electronic handheld calculator in 1970. The first true electronic pocket calculator was the Busicom LE-120A HANDY LE, which used a single MOS LSI calculator-on-a-chip from Mostek, and was released in 1971. By 1972, MOS LSI circuits were commercialized for numerous other applications.
=== Audio-visual (AV) media ===
MOSFETs are commonly used for a wide range of audio-visual (AV) media technologies, which include the following list of applications.
=== Power MOSFET applications ===
Power MOSFETs are commonly used for a wide range of consumer electronics. Power MOSFETs are widely used in the following consumer applications.
== Information and communications technology (ICT) ==
MOSFETs are fundamental to information and communications technology (ICT), including modern computers and computing, telecommunications, the communications infrastructure, the Internet, digital telephony, wireless telecommunications, and mobile networks. According to Colinge, the modern computer industry and digital telecommunication systems would not exist without the MOSFET. Advances in MOS technology have been the most important contributing factor in the rapid rise of network bandwidth in telecommunication networks, with bandwidth doubling every 18 months, from bits per second to terabits per second (Edholm's law).
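The 18-month doubling period of Edholm's law can be made concrete with a short calculation (the starting bandwidth and time span below are illustrative, not historical data):

```python
def bandwidth_after(years, initial_bps=1.0, doubling_period_years=1.5):
    """Bandwidth after `years`, doubling every 18 months (Edholm's law)."""
    return initial_bps * 2 ** (years / doubling_period_years)

# Starting from 1 bit/s, 40 doublings (~60 years) reach the terabit range.
print(bandwidth_after(60))  # ≈ 1.1e12 bits per second
```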
=== Computers ===
MOSFETs are commonly used in a wide range of computers and computing applications, which include the following.
=== Telecommunications ===
MOSFETs are commonly used in a wide range of telecommunications, which include the following applications.
=== Power MOSFET applications ===
== Insulated-gate bipolar transistor (IGBT) ==
The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT). As of 2010, the IGBT is the second most widely used power transistor, after the power MOSFET. The IGBT accounts for 27% of the power transistor market, second only to the power MOSFET (53%), and ahead of the RF amplifier (11%) and bipolar junction transistor (9%). The IGBT is widely used in consumer electronics, industrial technology, the energy sector, aerospace electronic devices, and transportation.
The IGBT is widely used in the following applications.
== Quantum physics ==
=== 2D electron gas and quantum Hall effect ===
In quantum physics and quantum mechanics, the MOSFET is the basis for two-dimensional electron gas (2DEG) and the quantum Hall effect. The MOSFET enables physicists to study electron behavior in a two-dimensional gas, called a two-dimensional electron gas. In a MOSFET, conduction electrons travel in a thin surface layer, and a "gate" voltage controls the number of charge carriers in this layer. This allows researchers to explore quantum effects by operating high-purity MOSFETs at liquid helium temperatures.
In 1978, the Gakushuin University researchers Jun-ichi Wakabayashi and Shinji Kawaji observed the Hall effect in experiments carried out on the inversion layer of MOSFETs. In 1980, Klaus von Klitzing, working at the high magnetic field laboratory in Grenoble with silicon-based MOSFET samples developed by Michael Pepper and Gerhard Dorda, made the unexpected discovery of the quantum Hall effect.
=== Quantum technology ===
The MOSFET is used in quantum technology. A quantum field-effect transistor (QFET) or quantum well field-effect transistor (QWFET) is a type of MOSFET that takes advantage of quantum tunneling to greatly increase the speed of transistor operation.
== Transportation ==
MOSFETs are widely used in transportation. For example, they are commonly used for automotive electronics in the automotive industry. MOS technology is commonly used for a wide range of vehicles and transportation, which include the following applications.
=== Automotive industry ===
MOSFETs are widely used in the automotive industry, particularly for automotive electronics in motor vehicles. Automotive applications include the following.
=== Power MOSFET applications ===
Power MOSFETs are widely used in transportation technology, which includes the following vehicles.
In the automotive industry, power MOSFETs are widely used in automotive electronics, which include the following.
=== IGBT applications ===
The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT). IGBTs are widely used in the following transportation applications.
=== Space industry ===
In the space industry, MOSFET devices were adopted by NASA for space research in 1964, for its Interplanetary Monitoring Platform (IMP) program and Explorers space exploration program. The use of MOSFETs was a major step forward in the electronics design of spacecraft and satellites. The IMP D (Explorer 33), launched in 1966, was the first spacecraft to use the MOSFET. Data gathered by IMP spacecraft and satellites were used to support the Apollo program, enabling the first crewed Moon landing with the Apollo 11 mission in 1969.
The Cassini–Huygens mission to Saturn, launched in 1997, accomplished spacecraft power distribution with 192 solid-state power switch (SSPS) devices, which also functioned as circuit breakers in the event of an overload condition. The switches were developed from a combination of two semiconductor devices with switching capabilities: the MOSFET and the ASIC (application-specific integrated circuit). This combination resulted in advanced power switches that had better performance characteristics than traditional mechanical switches.
== Other applications ==
MOSFETs are commonly used for a wide range of other applications, which include the following.
== References ==
In computer graphics, graphics software refers to a program or collection of programs that enable a person to manipulate images or models visually on a computer.
Computer graphics can be classified into two distinct categories: raster graphics and vector graphics, with further 2D and 3D variants. Many graphics programs focus exclusively on either vector or raster graphics, but there are a few that operate on both. It is simple to convert from vector graphics to raster graphics, but going the other way is harder. Some software attempts to do this.
In addition to static graphics, there is also animation and video editing software. Different types of software are often designed to edit different types of graphics, such as video, photos, and vector-based drawings. The exact sources of graphics may vary for different tasks, but most programs can read and write files.
Most graphics programs have the ability to import and export one or more graphics file formats, including those formats written for a particular computer graphics program. Such programs include GIMP, Adobe Photoshop, CorelDRAW, Microsoft Publisher, and Picasa.
A swatch is a palette of active colours that are selected and rearranged according to the preference of the user. A swatch may be used within a program or be part of the universal palette on an operating system. It is used to change the colour of text or an image and in video editing. Vector graphics animation can be described as a series of mathematical transformations that are applied in sequence to one or more shapes in a scene. Raster graphics animation works in a similar fashion to film-based animation, where a series of still images produces the illusion of continuous movement.
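The idea of vector animation as transformations applied in sequence can be sketched as follows (a rotation followed by a translation per frame; the shapes and rates here are illustrative):

```python
import math

def rotate(point, angle_rad):
    """Rotate a 2-D point about the origin."""
    x, y = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (x * c - y * s, x * s + y * c)

def translate(point, dx, dy):
    """Shift a 2-D point by (dx, dy)."""
    return (point[0] + dx, point[1] + dy)

def animate(shape, frame):
    """One animation frame: apply each transform in sequence to every vertex."""
    angle = math.radians(10 * frame)  # 10 degrees of rotation per frame
    return [translate(rotate(p, angle), frame, 0) for p in shape]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
frame_3 = animate(square, 3)  # rotated 30 degrees and shifted 3 units right
```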
== History ==
SuperPaint was one of the earliest graphics software applications, first conceptualized in 1972 and achieving its first stable image in 1973.
Fauve Matisse (later Macromedia xRes) was a pioneering program of the early 1990s, notably introducing layers in consumer software.
Currently Adobe Photoshop, which spawned custom hardware solutions in the early 1990s and was initially the subject of various litigation, is one of the most used and best-known graphics programs in the Americas. GIMP is a popular open-source alternative to Adobe Photoshop.
== See also ==
== References ==
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share its computing resources, but it requests content or service from a server and may share its own content as part of the request. Clients, therefore, initiate communication sessions with servers, which await incoming requests.
Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
== Client and server role ==
The server component provides a function or service to one or many clients, which initiate requests for such services.
Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service.
Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run a web server and file server software at the same time to serve different data to clients making different kinds of requests. The client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.
== Client and server communication ==
Generally, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the relevant application protocol, i.e. the content and the formatting of the data for the requested service.
Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.
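This request–response exchange can be sketched with a toy one-line application protocol (the message format is invented for illustration; a `socketpair` stands in for a network connection between two hosts):

```python
import socket
import threading

def server(sock):
    """Await one request and return a response (the server side of one exchange)."""
    request = sock.recv(1024).decode()
    # The application-layer protocol here is trivial: "GET <name>" -> "<name>: ok"
    if request.startswith("GET "):
        sock.sendall(f"{request[4:]}: ok".encode())
    else:
        sock.sendall(b"error: unknown request")
    sock.close()

def client(sock, name):
    """Initiate the exchange: send a request, then wait for the response."""
    sock.sendall(f"GET {name}".encode())
    reply = sock.recv(1024).decode()
    sock.close()
    return reply

# socketpair stands in for a network link; the server thread awaits the request.
s_client, s_server = socket.socketpair()
t = threading.Thread(target=server, args=(s_server,))
t.start()
print(client(s_client, "balance"))  # -> "balance: ok"
t.join()
```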
A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates.
Encryption should be applied if sensitive information is to be communicated between the client and the server.
== Example ==
When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials are compared against a database, and the web server accesses that database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the web server. Finally, the web server returns the result to the client web browser for display.
In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete.
This example illustrates a design pattern applicable to the client–server model: separation of concerns.
== Server-side ==
Server-side refers to programs and operations that run on the server. This is in contrast to client-side programs and operations which run on the client.
=== General concepts ===
"Server-side software" refers to a computer application, such as a web server, that runs on remote server hardware, reachable from a user's local computer, smartphone, or other device. Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because performing such operations on the client side would be slow, unreliable, or insecure.
Client and server programs may be commonly available ones such as free or commercial web servers and web browsers, communicating with each other using standardized protocols. Or, programmers may write their own server, client, and communications protocol which can only be used with one another.
Server-side operations include both those that are carried out in response to client requests, and non-client-oriented operations such as maintenance tasks.
=== Computer security ===
In a computer security context, server-side vulnerabilities or attacks refer to those that occur on a server computer system, rather than on the client side, or in between the two. For example, an attacker might exploit an SQL injection vulnerability in a web application in order to maliciously change or gain unauthorized access to data in the server's database. Alternatively, an attacker might break into a server system using vulnerabilities in the underlying operating system and then be able to access database and other files in the same manner as authorized administrators of the server.
=== Examples ===
In the case of distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, while the bulk of the operations occur on the client side, the servers are responsible for coordinating the clients, sending them data to analyze, receiving and storing results, providing reporting functionality to project administrators, etc. In the case of an Internet-dependent user application like Google Earth, while querying and display of map data takes place on the client side, the server is responsible for permanent storage of map data, resolving user queries into map data to be returned to the client, etc.
Web applications and services can be implemented in almost any language, as long as they can return data to standards-based web browsers (possibly via intermediary programs) in formats which they can use.
== Client side ==
Client-side refers to operations that are performed by the client in a computer network.
=== General concepts ===
Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client, without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk.
When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using both HTTP and FTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol which can only be used with one another.
Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations.
=== Computer security ===
In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client / user's computer system, rather than on the server side, or in between the two. As an example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys, etc. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware.
=== Examples ===
Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home), and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation), and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home).
== Early history ==
An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output.
While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s.
One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of Internet).
=== Client-host and server-host ===
Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and client-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving.
An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client). By 1992, the word server had entered into general parlance.
== Centralized computing ==
The client-server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions.
As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to rich clients. This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s.
== Comparison with peer-to-peer architecture ==
In addition to the client-server model, distributed computing applications often use the peer-to-peer (P2P) application architecture.
In the client-server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine.
Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them.
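A minimal sketch of round-robin distribution, the simplest such balancing policy (the backend names are placeholders):

```python
import itertools

class LoadBalancer:
    """Round-robin distribution of incoming requests across backend servers."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        server = next(self._cycle)  # pick the next backend in rotation
        return server, request

lb = LoadBalancer(["backend-1", "backend-2", "backend-3"])
routed = [lb.route(f"req-{i}")[0] for i in range(6)]
# -> ['backend-1', 'backend-2', 'backend-3', 'backend-1', 'backend-2', 'backend-3']
```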
In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client-server or client-queue-client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer them. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests.
Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.
== See also ==
== Notes ==
Integrated injection logic (IIL or I²L) is a class of digital circuits built with multiple-collector bipolar junction transistors (BJT). When introduced, it had speed comparable to TTL yet was almost as low-power as CMOS, making it ideal for use in VLSI (and larger) integrated circuits. The gates can be made smaller with this logic family than with CMOS because complementary transistors are not needed. Although the logic voltage levels are very close (high: 0.7 V, low: 0.2 V), I²L has high noise immunity because it operates by current instead of voltage. I²L was developed in 1971 by Siegfried K. Wiedmann and Horst H. Berger, who originally called it merged-transistor logic (MTL).
A disadvantage of this logic family is that its gates draw power when not switching, unlike CMOS gates.
== Construction ==
The I²L inverter gate is constructed from a PNP common-base current-source transistor and an NPN common-emitter open-collector inverter transistor (i.e., its emitter is connected to ground). On a wafer, these two transistors are merged. A small voltage (around 1 volt) is supplied to the emitter of the current-source transistor to control the current supplied to the inverter transistor. Transistors are used for current sources on integrated circuits because they are much smaller than resistors.
Because the inverter is open collector, a wired AND operation may be performed by connecting an output from each of two or more gates together. Thus the fan-out of an output used in such a way is one. However, additional outputs may be produced by adding more collectors to the inverter transistor. The gates can be constructed very simply with just a single layer of interconnect metal.
In a discrete implementation of an I2L circuit, bipolar NPN transistors with multiple collectors can be replaced with multiple discrete 3-terminal NPN transistors connected in parallel having their bases connected together and their emitters connected likewise. The current source transistor may be replaced with a resistor from the positive supply to the base of the inverter transistor, since discrete resistors are smaller and less expensive than discrete transistors.
Similarly, the merged PNP current injector transistor and the NPN inverter transistor can be implemented as separate discrete components.
== Operation ==
The heart of an I2L circuit is the common emitter open collector inverter. Typically, an inverter consists of an NPN transistor with the emitter connected to ground and the base biased with a forward current from the current source. The input is supplied to the base as either a current sink (low logic level) or as a high-z floating condition (high logic level). The output of an inverter is at the collector. Likewise, it is either a current sink (low logic level) or a high-z floating condition (high logic level).
Like direct-coupled transistor logic, there is no resistor between the output (collector) of one NPN transistor and the input (base) of the following transistor.
To understand how the inverter operates, it is necessary to understand the current flow. If the bias current is shunted to ground (low logic level), the transistor turns off and the collector floats (high logic level). If the bias current is not shunted to ground because the input is high-z (high logic level), the bias current flows through the transistor to the emitter, switching on the transistor and allowing the collector to sink current (low logic level). Because the output of the inverter can sink current but cannot source current, it is safe to connect the outputs of multiple inverters together to form a wired AND gate. When the outputs of two inverters are wired together, the result is a two-input NOR gate, because the configuration (NOT A) AND (NOT B) is equivalent to NOT (A OR B) (per De Morgan's theorem). Finally, when the output of the NOR gate is inverted by the I²L inverter in the upper right of the diagram, the result is a two-input OR gate.
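The open-collector behaviour described above can be sketched as a small truth-table model (a model of the logic only, not of the transistor physics; the function names are illustrative):

```python
# Levels: True = floating/high-Z (logic high), False = sinking current (logic low).

def inverter(input_high):
    """Open-collector I2L inverter: a floating (high) input turns the transistor
    on, so the collector sinks current (low); a grounded (low) input turns it
    off, letting the collector float (high)."""
    return not input_high

def wired_and(*outputs):
    """Tying open-collector outputs together: the shared node floats high only
    if every transistor is off; any one sinking transistor pulls it low."""
    return all(outputs)

def nor(a, b):
    # (NOT a) AND (NOT b) == NOT (a OR b), per De Morgan's theorem
    return wired_and(inverter(a), inverter(b))

def or_gate(a, b):
    # A further inverter on the NOR output yields OR.
    return inverter(nor(a, b))

# Exhaustively check both gates against the expected truth tables.
for a in (False, True):
    for b in (False, True):
        assert nor(a, b) == (not (a or b))
        assert or_gate(a, b) == (a or b)
```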
Due to internal parasitic capacitance in transistors, higher currents sourced into the base of the inverter transistor result in faster switching speeds, and since the voltage difference between high and low logic levels is smaller for I2L than other bipolar logic families (around 0.5 volts instead of around 3.3 or 5 volts), losses due to charging and discharging parasitic capacitances are minimized.
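The trade-off between injector current and switching speed can be illustrated with the slew-time relation t = C·ΔV/I (the capacitance, logic swing, and currents below are assumed round numbers, not measured values):

```python
def switching_time(c_farads, delta_v, injector_current):
    """Time to slew a capacitive node through delta_v volts: t = C * dV / I."""
    return c_farads * delta_v / injector_current

# Illustrative (assumed) values: 1 pF parasitic capacitance, 0.5 V logic swing.
t_low = switching_time(1e-12, 0.5, 10e-6)    # 10 uA injector  -> 50 ns
t_high = switching_time(1e-12, 0.5, 100e-6)  # 100 uA injector -> 5 ns
```

A tenfold increase in injector current gives a tenfold faster transition, and the small 0.5 V swing keeps the charge moved per transition (and hence the loss) low.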
== Usage ==
I2L is relatively simple to construct on an integrated circuit, and was commonly used before the advent of CMOS logic by companies such as Motorola (now NXP Semiconductors) and Texas Instruments. In 1975, Sinclair Radionics introduced one of the first consumer-grade digital watches, the Black Watch, which used I2L technology.
In 1976, Texas Instruments introduced the SBP0400 CPU, which used I²L technology.
In the late 1970s, RCA used I²L in its CA3162 ADC, a 3-digit meter integrated circuit. In 1979, HP introduced the HP 5315A/B, a frequency-measurement instrument based on an HP-made custom LSI chip that uses integrated injection logic (I²L) for low power consumption and high density, enabling portable battery operation, along with some emitter function logic (EFL) circuits where high speed is needed.
== References ==
== Further reading ==
Savard, John J. G. (2018) [2005]. "What Computers Are Made From". quadibloc. Archived from the original on 2018-07-02. Retrieved 2018-07-16.
An integrated device manufacturer (IDM) is a semiconductor company which designs, manufactures, and sells integrated circuit (IC) products.
IDM is often used to refer to a company which handles semiconductor manufacturing in-house, compared to a fabless semiconductor company, which outsources production to a third-party semiconductor fabrication plant.
Examples of IDMs are Intel, Samsung, and Texas Instruments, examples of fabless companies are AMD, Nvidia, Qualcomm, and Zhaoxin, and examples of pure play foundries are GlobalFoundries, TSMC, and UMC.
Due to the dynamic nature of the semiconductor industry, the term IDM has become less accurate than when it was coined.
== OSATs ==
The term OSAT stands for "outsourced semiconductor assembly and test" provider. OSATs have come to dominate IC packaging and testing.
== Fabless operations ==
The terms fabless (fabrication-less), foundry, and IDM are now used to describe the role a company has in a business relationship. For example, Freescale owns and operates fabrication facilities (fab) where it manufactures many chip product lines, as a traditional IDM would. Yet it is known to contract with merchant foundries for other products, as would fabless companies.
== Manufacturers ==
Many electronic manufacturing companies engage in business that would qualify them as an IDM:
== Further reading ==
Hurtarte, Jeorge S.; Wolsheimer, Evert A.; Tafoya, Lisa M. Understanding Fabless IC Technology. Section 1.4.1, "Integrated device manufacturer", p. 8.
== References ==
A linear integrated circuit or analog chip is a set of miniature electronic analog circuits formed on a single piece of semiconductor material.
== Description ==
The voltage and current at specified points in the circuits of analog chips vary continuously over time. In contrast, digital chips only assign meaning to voltages or currents at discrete levels. In addition to transistors, analog chips often include a larger number of passive elements (capacitors, resistors, and inductors) than digital chips. Inductors tend to be avoided because of their large physical size and the difficulty of incorporating them into monolithic semiconductor ICs. Certain circuits such as gyrators can often act as equivalents of inductors while being constructed only from transistors and capacitors.
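For the textbook ideal gyrator, a capacitor C terminated through resistances R1 and R2 is made to look like an inductor of value L = R1·R2·C, which is why easily fabricated transistors and capacitors can stand in for bulky inductors. A minimal sketch (component values are illustrative):

```python
def simulated_inductance(r1_ohms, r2_ohms, c_farads):
    """Equivalent inductance of an ideal gyrator loading capacitor C: L = R1 * R2 * C."""
    return r1_ohms * r2_ohms * c_farads

# 10 kOhm, 1 kOhm and 100 nF simulate roughly a 1 H inductor --
# far too large a value to fabricate as a physical coil on-chip.
L = simulated_inductance(10e3, 1e3, 100e-9)  # ≈ 1.0 henry
```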
Analog chips may also contain digital logic elements to replace some analog functions, or to allow the chip to communicate with a microprocessor. For this reason, and since logic is commonly implemented using CMOS technology, these chips typically use BiCMOS processes, as implemented by companies such as Freescale, Texas Instruments, STMicroelectronics, and others. This is known as "mixed signal processing", and allows a designer to incorporate more functions into a single chip. Some of the benefits of this mixed technology include load protection, reduced parts count, and higher reliability.
Purely analog chips in information processing have been mostly replaced with digital chips. Analog chips are still required for wideband signals, high-power applications, and transducer interfaces. Research and industry in this specialty continues to grow and prosper. Some examples of long-lived and well-known analog chips are the 741 operational amplifier, and the 555 timer IC.
Power supply chips are also considered to be analog chips. Their main purpose is to produce a well-regulated output voltage supply for other chips in the system. Since all electronic systems require electrical power, power supply ICs (power management integrated circuits, PMIC) are important elements of those systems.
Important basic building blocks of analog chip design include:
current source
current mirror
differential amplifier
voltage reference, bandgap voltage reference
All the above circuit building blocks can be implemented using bipolar technology as well as metal-oxide-silicon (MOS) technology. MOS bandgap references rely on lateral bipolar transistors to function.
People who have specialized in this field include Bob Widlar, Bob Pease, Hans Camenzind, George Erdi, Jim Williams, and Barrie Gilbert, among others.
== See also ==
List of linear integrated circuits
List of LM-series integrated circuits
4000-series integrated circuits
List of 4000-series integrated circuits
7400-series integrated circuits
List of 7400-series integrated circuits
== References ==
Intelligent Power and Sensing Technologies
CMOS Oscillators (AN-118)
CMOS Schmitt Trigger—A Uniquely Versatile Design Component (AN-140)
HCMOS Crystal Oscillators (AN-340)
== Further reading ==
Designing Analog Chips; Hans Camenzind; Virtual Bookworm; 244 pages; 2005; ISBN 978-1589397187. (Free Book)
The 7400 series is a popular logic family of transistor–transistor logic (TTL) integrated circuits (ICs).
In 1964, Texas Instruments introduced the SN5400 series of logic chips in a ceramic semiconductor package. A low-cost plastic-package SN7400 series was introduced in 1966; it quickly gained over 50% of the logic chip market, and its parts eventually became de facto standard electronic components. Since the introduction of the original bipolar-transistor TTL parts, pin-compatible parts have been introduced with such features as low-power CMOS technology and lower supply voltages. Surface-mount packages exist for several popular logic family functions.
== Overview ==
The 7400 series contains hundreds of devices that provide everything from basic logic gates, flip-flops, and counters, to special purpose bus transceivers and arithmetic logic units (ALU). Specific functions are described in a list of 7400 series integrated circuits. Some TTL parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. The less-common 64 and 84 prefixes on Texas Instruments parts indicated an industrial temperature range. Since the 1970s, new product families have been released to replace the original 7400 series. More recent TTL-compatible logic families were manufactured using CMOS or BiCMOS technology rather than TTL.
Today, surface-mounted CMOS versions of the 7400 series are used in various applications in electronics and for glue logic in computers and industrial electronics. The original through-hole devices in dual in-line packages (DIP/DIL) were the mainstay of the industry for many decades. They are useful for rapid breadboard-prototyping and for education and remain available from most manufacturers. The fastest types and very low voltage versions are typically surface-mount only, however.
The first part number in the series, the 7400, is a 14-pin IC containing four two-input NAND gates. Each gate uses two input pins and one output pin, with the remaining two pins being power (+5 V) and ground. This part was made in various through-hole and surface-mount packages, including flat pack and plastic/ceramic dual in-line. Additional characters in a part number identify the package and other variations.
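The logical behavior of the part described above can be sketched in a few lines of Python. This is a behavioral model only, ignoring the power and ground pins and all electrical characteristics:

```python
def nand(a: int, b: int) -> int:
    """One 2-input NAND gate: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def sn7400(inputs):
    """Behavioral model of a 7400: four independent 2-input NAND gates.

    `inputs` is a list of four (a, b) pairs, one per gate; the two
    remaining pins of the 14-pin package carry +5 V and ground, which
    a pure logic-level model can ignore.
    """
    return [nand(a, b) for a, b in inputs]

# Truth table of a single gate, and all four gates driven at once:
assert [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
assert sn7400([(1, 1), (0, 1), (1, 0), (0, 0)]) == [0, 1, 1, 1]
```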
Unlike the older resistor-transistor logic integrated circuits, bipolar TTL gates were unsuitable for use as analog devices, having low gain, poor stability, and low input impedance. Special-purpose TTL devices were instead used to provide interface functions such as Schmitt triggers or monostable multivibrator timing circuits. Inverting gates could be cascaded as a ring oscillator, useful for purposes where high stability was not required.
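As a rough, idealized illustration of the ring-oscillator arrangement: a logic transition must travel around the ring twice (once for each output polarity) per full period, so the frequency follows directly from the gate count and per-gate delay. The 10 ns delay below is a hypothetical round figure, not a datasheet value:

```python
def ring_oscillator_freq(n_inverters: int, t_pd_s: float) -> float:
    """Approximate frequency of a ring oscillator: the period is the
    time for a transition to make two full trips around the ring,
    i.e. 2 * n * t_pd, so f = 1 / (2 * n * t_pd)."""
    if n_inverters % 2 == 0:
        raise ValueError("a ring oscillator needs an odd number of inverters")
    return 1.0 / (2 * n_inverters * t_pd_s)

# Three inverters with an assumed 10 ns propagation delay each:
f = ring_oscillator_freq(3, 10e-9)
print(round(f / 1e6, 1), "MHz")  # prints: 16.7 MHz
```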
=== History ===
Although the 7400 series was the first de facto industry standard TTL logic family (i.e. second-sourced by several semiconductor companies), there were earlier TTL logic families such as:
Sylvania Universal High-level Logic in 1963
Motorola MC4000 MTTL
National Semiconductor DM8000
Fairchild 9300 series
Signetics 8200 and 8T00
The 7400 quad 2-input NAND gate was the first product in the series, introduced by Texas Instruments in a military grade metal flat package (5400W) in October 1964. The pin assignment of this early series differed from the de facto standard set by the later series in DIP packages (in particular, ground was connected to pin 11 and the power supply to pin 4, compared to pins 7 and 14 for DIP packages). The extremely popular commercial grade plastic DIP (7400N) followed in the third quarter of 1966.
The 5400 and 7400 series were used in many popular minicomputers in the 1970s and early 1980s. Some models of the DEC PDP-series "minis" used the 74181 ALU as the main computing element in the CPU. Other examples were the Data General Nova series and Hewlett-Packard 21MX, 1000, and 3000 series.
In 1965, typical quantity-one pricing for the SN5400 (military grade, in ceramic welded flat-pack) was around 22 USD. As of 2007, individual commercial-grade chips in molded epoxy (plastic) packages can be purchased for approximately US$0.25 each, depending on the particular chip.
== Families ==
7400 series parts were constructed using bipolar junction transistors (BJT), forming what is referred to as transistor–transistor logic or TTL. Newer series, more or less compatible in function and logic level with the original parts, use CMOS technology or a combination of the two (BiCMOS). Originally the bipolar circuits provided higher speed but consumed more power than the competing 4000 series of CMOS devices. Bipolar devices are also limited to a fixed power-supply voltage, typically 5 V, while CMOS parts often support a range of supply voltages.
Milspec-rated devices for use in extended temperature conditions are available as the 5400 series. Texas Instruments also manufactured radiation-hardened devices with the prefix RSN, and the company offered beam-lead bare dies for integration into hybrid circuits with a BL prefix designation.
Regular-speed TTL parts were also available for a time in the 6400 series – these had an extended industrial temperature range of −40 °C to +85 °C. While companies such as Mullard listed 6400-series compatible parts in 1970 data sheets, by 1973 there was no mention of the 6400 family in the Texas Instruments TTL Data Book. Texas Instruments brought back the 6400 series in 1989 for the SN64BCT540. The SN64BCTxxx series is still in production as of 2023. Some companies have also offered industrial extended temperature range variants using the regular 7400-series part numbers with a prefix or suffix to indicate the temperature grade.
As integrated circuits in the 7400 series were made in different technologies, usually compatibility was retained with the original TTL logic levels and power-supply voltages. An integrated circuit made in CMOS is not a TTL chip, since it uses field-effect transistors (FETs) and not bipolar junction transistors (BJT), but similar part numbers are retained to identify similar logic functions and electrical (power and I/O voltage) compatibility in the different subfamilies.
Over 40 different logic subfamilies use this standardized part number scheme. The headings in the following table are: Vcc – power-supply voltage; tpd – maximum gate delay; IOL – maximum output current at low level; IOH – maximum output current at high level; tpd, IOL, and IOH apply to most gates in a given family. Driver or buffer gates have higher output currents.
Many parts in the CMOS HC, AC, AHC, and VHC families are also offered in "T" versions (HCT, ACT, AHCT and VHCT) which have input thresholds that are compatible with both TTL and 3.3 V CMOS signals. The non-T parts have conventional CMOS input thresholds, which are more restrictive than TTL thresholds. Typically, CMOS input thresholds require high-level signals to be at least 70% of Vcc and low-level signals to be at most 30% of Vcc. (TTL has the input high level above 2.0 V and the input low level below 0.8 V, so a TTL high-level signal could be in the forbidden middle range for 5 V CMOS.)
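The threshold comparison above can be made concrete with a small sketch using the nominal figures from the text; real devices specify these limits per part and per supply voltage, so treat the numbers as illustrative:

```python
def cmos_level(v: float, vcc: float = 5.0) -> str:
    """Classify an input voltage against conventional CMOS thresholds:
    low at or below 30% of Vcc, high at or above 70% of Vcc,
    otherwise in the forbidden middle range."""
    if v <= 0.3 * vcc:
        return "low"
    if v >= 0.7 * vcc:
        return "high"
    return "undefined"

def ttl_level(v: float) -> str:
    """Classify an input voltage against TTL thresholds (0.8 V / 2.0 V)."""
    if v <= 0.8:
        return "low"
    if v >= 2.0:
        return "high"
    return "undefined"

# A 2.4 V signal is a valid TTL high, but falls in the forbidden
# middle range for a 5 V CMOS (non-T) input:
assert ttl_level(2.4) == "high"
assert cmos_level(2.4) == "undefined"
```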
The 74H family is the same basic design as the 7400 family with resistor values reduced. This reduced the typical propagation delay from 9 ns to 6 ns but increased the power consumption. The 74H family provided a number of unique devices for CPU designs in the 1970s. Many designers of military and aerospace equipment used this family over a long period, and because they need exact replacements, this family is still produced by Lansdale Semiconductor.
The 74S family, using Schottky circuitry, uses more power than the 74, but is faster. The 74LS family of ICs is a lower-power version of the 74S family, with slightly higher speed but lower power dissipation than the original 74 family; it became the most popular variant once it was widely available. Many 74LS ICs can be found in microcomputers and digital consumer electronics manufactured in the 1980s and early 1990s.
The 74F family was introduced by Fairchild Semiconductor and adopted by other manufacturers; it is faster than the 74, 74LS and 74S families.
Through the late 1980s and 1990s newer versions of this family were introduced to support the lower operating voltages used in newer CPU devices.
== Part numbering ==
Part number schemes varied by manufacturer. The part numbers for 7400-series logic devices often use the following designators:
Often first, a two- or three-letter prefix, denoting the manufacturer and flow class of the device. These codes are no longer closely associated with a single manufacturer; for example, Fairchild Semiconductor has manufactured parts with MM and DM prefixes as well as with no prefix at all. Examples:
SN: Texas Instruments using commercial processing
SNV: Texas Instruments using military processing
M: ST Microelectronics
DM: National Semiconductor
UT: Cobham PLC
SG: Sylvania
RD: RIFA AB
Two digits for temperature range. Examples:
54: military temperature range
64: short-lived historical series with intermediate "industrial" temperature range
74: commercial temperature range device
Zero to four letters denoting the logic subfamily. Examples:
zero letters: basic bipolar TTL
LS: low power Schottky
HCT: High-speed CMOS compatible with TTL
Two or more arbitrarily assigned digits that identify the function of the device. There are hundreds of different devices in each family.
Additional suffix letters and numbers may be appended to denote the package type, quality grade, or other information, but this varies widely by manufacturer.
For example, "SN5400N" signifies that the part is a 7400-series IC probably manufactured by Texas Instruments ("SN" originally meaning "Semiconductor Network") using commercial processing, is of the military temperature rating ("54"), and is of the TTL family (absence of a family designator), its function being the quad 2-input NAND gate ("00") implemented in a plastic through-hole DIP package ("N").
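A simplified decoder for this scheme can be sketched in Python. It recognizes only the handful of prefixes, temperature codes, and subfamilies listed above; the coverage is illustrative, and real part catalogs vary widely:

```python
import re

# Lookup tables limited to the example codes described above.
PREFIXES = {"SN": "Texas Instruments (commercial processing)",
            "DM": "National Semiconductor",
            "M": "ST Microelectronics"}
TEMP = {"54": "military", "64": "industrial (historical)", "74": "commercial"}
FAMILIES = {"": "bipolar TTL", "LS": "low-power Schottky",
            "HCT": "high-speed CMOS, TTL-compatible inputs"}

def decode(part: str):
    """Split a part number into prefix, temperature range, subfamily,
    function digits, and package/grade suffix, per the scheme above."""
    m = re.match(r"([A-Z]{0,3}?)(54|64|74)([A-Z]{0,4}?)(\d{2,})([A-Z]*)$", part)
    if not m:
        return None
    prefix, temp, family, function, suffix = m.groups()
    return {"manufacturer": PREFIXES.get(prefix, prefix or "unknown"),
            "temperature": TEMP[temp],
            "family": FAMILIES.get(family, family),
            "function": function,
            "suffix": suffix or None}

info = decode("SN5400N")
assert info["temperature"] == "military"
assert info["family"] == "bipolar TTL"
assert info["function"] == "00"
```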
Many logic families maintain a consistent use of the device numbers as an aid to designers. Often a part from a different 74x00 subfamily could be substituted ("drop-in replacement") in a circuit, with the same function and pin-out yet more appropriate characteristics for an application (perhaps speed or power consumption), which was a large part of the appeal of the 74C00 series over the competing CD4000B series, for example. But there are a few exceptions where incompatibilities (mainly in pin-out) across the subfamilies occurred, such as:
some flat-pack devices (e.g. 7400W) and surface-mount devices,
some of the faster CMOS series (for example 74AC),
a few low-power TTL devices (e.g. 74L86, 74L9 and 74L95) have a different pin-out than the regular (or even 74LS) series part,
five versions of the 74x54 (4-wide AND-OR-INVERT gate IC), namely 7454(N), 7454W, 74H54, 74L54W and 74L54N/74LS54, are different from each other in pin-out and/or function.
== Second sources from Europe and Eastern Bloc ==
Some manufacturers, such as Mullard and Siemens, had pin-compatible TTL parts, but with a completely different numbering scheme; however, data sheets identified the 7400-compatible number as an aid to recognition.
At the time the 7400 series was being made, some European manufacturers (that traditionally followed the Pro Electron naming convention), such as Philips/Mullard, produced a series of TTL integrated circuits with part names beginning with FJ. Some examples of FJ series are:
FJH101 (=7430) single 8-input NAND gate,
FJH131 (=7400) quadruple 2-input NAND gate,
FJH181 (=7454N or J) 2+2+2+2 input AND-OR-NOT gate.
The Soviet Union started manufacturing TTL ICs with 7400-series pinout in the late 1960s and early 1970s, such as the K155ЛA3, which was pin-compatible with the 7400 part available in the United States, except for using a metric spacing of 2.5 mm between pins instead of the 0.1 inches (2.54 mm) pin-to-pin spacing used in the west.
Another peculiarity of the Soviet-made 7400 series was the packaging material used in the 1970s–1980s. Instead of the ubiquitous black resin, they had a brownish-green body colour with subtle swirl marks created during the moulding process. It was jokingly referred to in the Eastern Bloc electronics industry as the "elephant-dung packaging", due to its appearance.
The Soviet integrated circuit designation is different from the Western series:
the technology modifications were considered different series and were identified by different numbered prefixes – К155 series is equivalent to plain 74, К555 series is 74LS, К1533 is 74ALS, etc.;
the function of the unit is described with a two-letter code followed by a number:
the first letter represents the functional group – logical, triggers, counters, multiplexers, etc.;
the second letter shows the functional subgroup, making the distinction between logical NAND and NOR, D- and JK-triggers, decimal and binary counters, etc.;
the number distinguishes variants with different number of inputs or different number of elements within a die – ЛА1/ЛА2/ЛА3 (LA1/LA2/LA3) are 2 four-input / 1 eight-input / 4 two-input NAND elements respectively (equivalent to 7420/7430/7400).
Before July 1974 the two letters from the functional description were inserted after the first digit of the series. Examples: К1ЛБ551 and К155ЛА1 (7420), К1ТМ552 and К155ТМ2 (7474) are the same ICs made at different times.
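The series and function correspondences above can be illustrated with a small lookup, covering only the codes mentioned in the text:

```python
# Technology series and NAND-function codes taken from the text above.
SERIES = {"К155": "74", "К555": "74LS", "К1533": "74ALS"}
FUNCTION = {"ЛА1": "20", "ЛА2": "30", "ЛА3": "00"}  # NAND element variants

def soviet_to_western(part: str):
    """Map a Soviet designation such as 'К155ЛА3' to its Western
    7400-series equivalent, for the handful of codes listed here."""
    for series, family in SERIES.items():
        if part.startswith(series):
            func = FUNCTION.get(part[len(series):])
            if func is not None:
                return family + func
    return None

assert soviet_to_western("К155ЛА3") == "7400"     # 4 two-input NAND gates
assert soviet_to_western("К555ЛА1") == "74LS20"   # 2 four-input NAND gates
```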
Clones of the 7400 series were also made in other Eastern Bloc countries:
Bulgaria (Mikroelektronika Botevgrad) used a designation somewhat similar to that of the Soviet Union, e.g. 1ЛБ00ШМ (1LB00ShM) for a 74LS00. Some of the two-letter functional groups were borrowed from the Soviet designation, while others differed. Unlike the Soviet scheme, the two or three digit number after the functional group matched the western counterpart. The series followed at the end (i.e. ШМ for LS). Only the LS series is known to have been manufactured in Bulgaria.
Czechoslovakia (TESLA) used the 7400 numbering scheme with manufacturer prefix MH. Example: MH7400. Tesla also produced industrial-grade (8400, −25 °C to 85 °C) and military-grade (5400, −55 °C to 125 °C) parts.
Poland (Unitra CEMI) used the 7400 numbering scheme with manufacturer prefixes UCA for the 5400 and 6400 series, as well as UCY for the 7400 series. Examples: UCA6400, UCY7400. Note that ICs with the prefix MCY74 correspond to the 4000 series (e.g. MCY74002 corresponds to 4002 and not to 7402).
Hungary (Tungsram, later Mikroelektronikai Vállalat / MEV) also used the 7400 numbering scheme, but with manufacturer suffix – 7400 is marked as 7400APC.
Romania (I.P.R.S.) used a trimmed 7400 numbering with the manufacturer prefix CDB (example: CDB4123E corresponds to 74123) for the 74 and 74H series, where the suffix H indicated the 74H series. For the later 74LS series, the standard numbering was used.
East Germany (HFO) also used trimmed 7400 numbering without manufacturer prefix or suffix. The prefix D (or E) designates digital IC, and not the manufacturer. Example: D174 is 7474. 74LS clones were designated by the prefix DL; e.g. DL000 = 74LS00. In later years East German made clones were also available with standard 74* numbers, usually for export.
A number of different technologies were available from the Soviet Union, Czechoslovakia, Poland, and East Germany. The 8400 series in the table below indicates an industrial temperature range from −25 °C to +85 °C (as opposed to −40 °C to +85 °C for the 6400 series).
Around 1990 the production of standard logic ceased in all Eastern European countries except the Soviet Union and later Russia and Belarus. As of 2016, the series 133, К155, 1533, КР1533, 1554, 1594, and 5584 were in production at "Integral" in Belarus, as well as the series 130 and 530 at "NZPP-KBR", 134 and 5574 at "VZPP", 533 at "Svetlana", 1564, К1564, КР1564 at "NZPP", 1564, К1564 at "Voshod", 1564 at "Exiton", and 133, 530, 533, 1533 at "Mikron" in Russia.
The Russian company Angstrem manufactures 54HC circuits as the 5514БЦ1 series, 54AC as the 5514БЦ2 series, and 54LVC as the 5524БЦ2 series.
As of 2024, the 133, 136, and 1533 series are in production at Kvazar Kyiv in Ukraine.
== See also ==
== References ==
== Further reading ==
Books
50 Circuits Using 7400 Series IC's; 1st Ed; R.N. Soar; Bernard Babani Publishing; 76 pages; 1979; ISBN 0900162775. (archive)
TTL Cookbook; 1st Ed; Don Lancaster; Sams Publishing; 412 pages; 1974; ISBN 978-0672210358. (archive)
Designing with TTL Integrated Circuits; 1st Ed; Robert Morris, John Miller; Texas Instruments and McGraw-Hill; 322 pages; 1971; ISBN 978-0070637450. (archive)
App Notes
Understanding and Interpreting Standard-Logic Data Sheets; Stephen Nolan, Jose Soltero, Shreyas Rao; Texas Instruments; 60 pages; 2016.
Comparison of 74HC / 74S / 74LS / 74ALS Logic; Fairchild; 6 pages, 1983.
Interfacing to 74HC Logic; Fairchild; 10 pages; 1998.
74AHC / 74AHCT Designer's Guide; TI; 53 pages; 1998. Compares 74HC / 74AHC / 74AC (CMOS I/O) and 74HCT / 74AHCT / 74ACT (TTL I/O).
Fairchild Semiconductor / ON Semiconductor
Historical Data Books: TTL (1978, 752 pages), FAST (1981, 349 pages)
Logic Selection Guide (2008, 12 pages)
Nexperia / NXP Semiconductor
Logic Selection Guide (2020, 234 pages)
Logic Application Handbook Design Engineer's Guide (2021, 157 pages)
Logic Translators (2021, 62 pages)
Texas Instruments / National Semiconductor
Historical Catalog: (1967, 375 pages)
Historical Databooks: TTL Vol1 (1984, 339 pages), TTL Vol2 (1985, 1402 pages), TTL Vol3 (1984, 793 pages), TTL Vol4 (1986, 445 pages)
Digital Logic Pocket Data Book (2007, 794 pages), Logic Reference Guide (2004, 8 pages), Logic Selection Guide (1998, 215 pages)
Little Logic Guide (2018, 25 pages), Little Logic Selection Guide (2004, 24 pages)
Toshiba
General-Purpose Logic ICs (2012, 55 pages)
== External links ==
Understanding 7400-series digital logic ICs - Nuts and Volts magazine
Thorough list of 7400-series ICs - Electronics Club
Nanoelectromechanical systems (NEMS) are a class of devices integrating electrical and mechanical functionality on the nanoscale. NEMS form the next logical miniaturization step from so-called microelectromechanical systems, or MEMS devices. NEMS typically integrate transistor-like nanoelectronics with mechanical actuators, pumps, or motors, and may thereby form physical, biological, and chemical sensors. The name derives from typical device dimensions in the nanometer range, leading to low mass, high mechanical resonance frequencies, potentially large quantum mechanical effects such as zero point motion, and a high surface-to-volume ratio useful for surface-based sensing mechanisms. Applications include accelerometers and sensors to detect chemical substances in the air.
== History ==
=== Background ===
As noted by Richard Feynman in his famous talk in 1959, "There's Plenty of Room at the Bottom," there are many potential applications of machines at smaller and smaller sizes; by building and controlling devices at smaller scales, all technology benefits. The expected benefits include greater efficiencies and reduced size, decreased power consumption and lower costs of production in electromechanical systems.
The first silicon dioxide field effect transistors were built by Frosch and Derick in 1957 at Bell Labs. In 1960, Atalla and Kahng at Bell Labs fabricated a MOSFET with a gate oxide thickness of 100 nm. In 1962, Atalla and Kahng fabricated a nanolayer-base metal–semiconductor junction (M–S junction) transistor that used gold (Au) thin films with a thickness of 10 nm. In 1987, Bijan Davari led an IBM research team that demonstrated the first MOSFET with a 10 nm oxide thickness. Multi-gate MOSFETs enabled scaling below 20 nm channel length, starting with the FinFET. The FinFET originates from the research of Digh Hisamoto at Hitachi Central Research Laboratory in 1989. At UC Berkeley, a group led by Hisamoto and TSMC's Chenming Hu fabricated FinFET devices down to 17 nm channel length in 1998.
=== NEMS ===
In 2000, the first very-large-scale integration (VLSI) NEMS device was demonstrated by researchers at IBM. Its premise was an array of AFM tips which can heat/sense a deformable substrate in order to function as a memory device (Millipede memory). Further devices have been described by Stefan de Haan. In 2007, the International Technology Roadmap for Semiconductors (ITRS) added NEMS memory as a new entry in its Emerging Research Devices section.
== Atomic force microscopy ==
A key application of NEMS is atomic force microscope tips. The increased sensitivity achieved by NEMS leads to smaller and more efficient sensors to detect stresses, vibrations, forces at the atomic level, and chemical signals. AFM tips and other detection at the nanoscale rely heavily on NEMS.
== Approaches to miniaturization ==
Two complementary approaches to fabrication of NEMS can be found, the top-down approach and the bottom-up approach.
The top-down approach uses the traditional microfabrication methods, i.e. optical, electron-beam lithography and thermal treatments, to manufacture devices. While being limited by the resolution of these methods, it allows a large degree of control over the resulting structures. In this manner devices such as nanowires, nanorods, and patterned nanostructures are fabricated from metallic thin films or etched semiconductor layers. For top-down approaches, increasing surface area to volume ratio enhances the reactivity of nanomaterials.
Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to self-organize or self-assemble into some useful conformation, or rely on positional assembly. These approaches utilize the concepts of molecular self-assembly and/or molecular recognition. This allows fabrication of much smaller structures, albeit often at the cost of limited control of the fabrication process. Furthermore, while there are residue materials removed from the original structure for the top-down approach, minimal material is removed or wasted for the bottom-up approach.
A combination of these approaches may also be used, in which nanoscale molecules are integrated into a top-down framework. One such example is the carbon nanotube nanomotor.
== Materials ==
=== Carbon allotropes ===
Many of the commonly used materials for NEMS technology have been carbon based, specifically diamond, carbon nanotubes and graphene. This is mainly because of the useful properties of carbon based materials which directly meet the needs of NEMS. The mechanical properties of carbon (such as large Young's modulus) are fundamental to the stability of NEMS while the metallic and semiconductor conductivities of carbon based materials allow them to function as transistors.
Both graphene and diamond exhibit high Young's modulus, low density, low friction, exceedingly low mechanical dissipation, and large surface area. The low friction of CNTs allows practically frictionless bearings and has thus been a huge motivation towards practical applications of CNTs as constitutive elements in NEMS, such as nanomotors, switches, and high-frequency oscillators. The physical strength of carbon nanotubes and graphene allows carbon-based materials to meet higher stress demands where common materials would normally fail, further supporting their use as major materials in NEMS technological development.
Along with the mechanical benefits of carbon based materials, the electrical properties of carbon nanotubes and graphene allow it to be used in many electrical components of NEMS. Nanotransistors have been developed for both carbon nanotubes as well as graphene. Transistors are one of the basic building blocks for all electronic devices, so by effectively developing usable transistors, carbon nanotubes and graphene are both very crucial to NEMS.
Nanomechanical resonators are frequently made of graphene. As NEMS resonators are scaled down in size, there is a general trend for the quality factor to decrease in inverse proportion to the surface-to-volume ratio. Despite this challenge, graphene resonators have been experimentally shown to reach quality factors as high as 2400. The quality factor describes the purity of tone of the resonator's vibrations. Furthermore, it has been theoretically predicted that clamping graphene membranes on all sides yields increased quality factors. Graphene NEMS can also function as mass, force, and position sensors.
==== Metallic carbon nanotubes ====
Carbon nanotubes (CNTs) are allotropes of carbon with a cylindrical nanostructure; they can be considered sheets of graphene rolled up at specific and discrete ("chiral") angles. The combination of the rolling angle and radius decides whether the nanotube has a bandgap (semiconducting) or no bandgap (metallic).
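The chirality dependence can be stated as the standard (n, m) indexing rule; the rule itself is not spelled out in the text above but is well established in the CNT literature:

```python
def cnt_is_metallic(n: int, m: int) -> bool:
    """Standard chirality rule: a (n, m) nanotube is metallic (to first
    order) when n - m is divisible by 3, otherwise semiconducting.
    (Curvature effects can open small gaps in narrow tubes.)"""
    return (n - m) % 3 == 0

assert cnt_is_metallic(5, 5)         # armchair tubes are always metallic
assert not cnt_is_metallic(10, 0)    # this zigzag tube is semiconducting
```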
Metallic carbon nanotubes have also been proposed for nanoelectronic interconnects since they can carry high current densities. This is a useful property as wires to transfer current are another basic building block of any electrical system. Carbon nanotubes have specifically found so much use in NEMS that methods have already been discovered to connect suspended carbon nanotubes to other nanostructures. This allows carbon nanotubes to form complicated nanoelectric systems. Because carbon based products can be properly controlled and act as interconnects as well as transistors, they serve as a fundamental material in the electrical components of NEMS.
==== CNT-based NEMS switches ====
A major disadvantage of MEMS switches compared with NEMS switches is the limited, microsecond-range switching speed of MEMS, which impedes performance in high-speed applications. Limitations on switching speed and actuation voltage can be overcome by scaling devices down from the micrometer to the nanometer scale. A comparison of performance parameters between carbon nanotube (CNT)-based NEMS switches and their CMOS counterparts revealed that CNT-based NEMS switches retained performance at lower levels of energy consumption and had a subthreshold leakage current several orders of magnitude smaller than that of CMOS switches. CNT-based NEMS with doubly clamped structures are being further studied as potential solutions for floating-gate nonvolatile memory applications.
==== Difficulties ====
Despite all of the useful properties of carbon nanotubes and graphene for NEMS technology, both of these products face several hindrances to their implementation. One of the main problems is carbon's response to real-life environments. Carbon nanotubes exhibit a large change in electronic properties when exposed to oxygen. Similarly, other changes to the electronic and mechanical attributes of carbon-based materials must be fully explored before their implementation, especially because of their high surface area, which can easily react with surrounding environments. Carbon nanotubes were also found to have varying conductivities, being either metallic or semiconducting depending on their helicity when processed. Because of this, special treatment must be given to the nanotubes during processing to ensure that all of the nanotubes have appropriate conductivities. Graphene also has complicated electric conductivity properties compared to traditional semiconductors because it lacks an energy band gap, which essentially changes all the rules for how electrons move through a graphene-based device. This means that traditional constructions of electronic devices will likely not work, and completely new architectures must be designed for these new electronic devices.
==== Nanoelectromechanical accelerometer ====
Graphene's mechanical and electronic properties have made it favorable for integration into NEMS accelerometers, such as small sensors and actuators for heart monitoring systems and mobile motion capture. The atomic scale thickness of graphene provides a pathway for accelerometers to be scaled down from micro to nanoscale while retaining the system's required sensitivity levels.
By suspending a silicon proof mass on a double-layer graphene ribbon, a nanoscale spring-mass and piezoresistive transducer can be made that matches the capability of currently produced transducers in accelerometers. The spring mass provides greater accuracy, and the piezoresistive properties of graphene convert the strain from acceleration into electrical signals for the accelerometer. The suspended graphene ribbon simultaneously forms the spring and the piezoresistive transducer, making efficient use of space while improving the performance of NEMS accelerometers.
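The spring-mass description above can be made concrete with the textbook resonance formula f = (1/2π)·√(k/m). The spring constant and proof-mass values below are hypothetical round numbers, chosen only to show why nanoscale masses push resonance frequencies up:

```python
import math

def resonant_frequency(k: float, m: float) -> float:
    """Natural frequency (Hz) of an ideal spring-mass resonator:
    f = (1 / 2π) * sqrt(k / m)."""
    return math.sqrt(k / m) / (2 * math.pi)

# A 1 N/m spring with a 1 nanogram (1e-12 kg) proof mass resonates
# near 159 kHz; since f scales as 1/sqrt(m), shrinking the mass a
# millionfold raises the frequency a thousandfold.
f = resonant_frequency(1.0, 1e-12)
print(f"{f / 1e3:.0f} kHz")  # prints: 159 kHz
```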
=== Polydimethylsiloxane (PDMS) ===
Failures arising from high adhesion and friction are of concern for many NEMS. NEMS frequently utilize silicon due to well-characterized micromachining techniques; however, its intrinsic stiffness often hinders the capability of devices with moving parts.
A study conducted by Ohio State researchers compared the adhesion and friction parameters of a single crystal silicon with native oxide layer against PDMS coating. PDMS is a silicone elastomer that is highly mechanically tunable, chemically inert, thermally stable, permeable to gases, transparent, non-fluorescent, biocompatible, and nontoxic. Inherent to polymers, the Young's Modulus of PDMS can vary over two orders of magnitude by manipulating the extent of crosslinking of polymer chains, making it a viable material in NEMS and biological applications. PDMS can form a tight seal with silicon and thus be easily integrated into NEMS technology, optimizing both mechanical and electrical properties. Polymers like PDMS are beginning to gain attention in NEMS due to their comparatively inexpensive, simplified, and time-efficient prototyping and manufacturing.
Rest time has been characterized to correlate directly with adhesive force, and increased relative humidity leads to an increase of adhesive forces for hydrophilic polymers. Contact angle measurements and Laplace force calculations support the characterization of PDMS's hydrophobic nature, which, as expected, corresponds with its experimentally verified independence from relative humidity. PDMS's adhesive forces are also independent of rest time; it performs versatilely under varying relative-humidity conditions and possesses a lower coefficient of friction than that of silicon. PDMS coatings help mitigate high-velocity problems such as sliding, so friction at contact surfaces remains low even at considerably high velocities. In fact, on the microscale, friction reduces with increasing velocity. The hydrophobicity and low friction coefficient of PDMS have given rise to its potential for further incorporation in NEMS experiments conducted at varying relative humidities and high relative sliding velocities.
==== PDMS-coated piezoresistive nanoelectromechanical systems diaphragm ====
PDMS is frequently used within NEMS technology. For instance, PDMS coating on a diaphragm can be used for chloroform vapor detection.
Researchers from the National University of Singapore invented a polydimethylsiloxane (PDMS)-coated nanoelectromechanical system diaphragm embedded with silicon nanowires (SiNWs) to detect chloroform vapor at room temperature. In the presence of chloroform vapor, the PDMS film on the micro-diaphragm absorbs vapor molecules and consequently enlarges, leading to deformation of the micro-diaphragm. The SiNWs implanted within the micro-diaphragm are linked in a Wheatstone bridge, which translates the deformation into a quantitative output voltage. In addition, the micro-diaphragm sensor demonstrates low-cost processing at low power consumption. It possesses great potential for scalability, an ultra-compact footprint, and CMOS-IC process compatibility. By switching the vapor-absorbing polymer layer, similar methods should theoretically be able to detect other organic vapors.
In addition to its inherent properties discussed in the Materials section, PDMS can be used to absorb chloroform, which swells and deforms the micro-diaphragm; various other organic vapors were also gauged in this study. With good aging stability and appropriate packaging, the degradation rate of PDMS in response to heat, light, and radiation can be slowed.
=== Biohybrid NEMS ===
The emerging field of bio-hybrid systems combines biological and synthetic structural elements for biomedical or robotic applications. The constituting elements of bio-nanoelectromechanical systems (BioNEMS) are of nanoscale size, for example DNA, proteins or nanostructured mechanical parts. Examples include the facile top-down nanostructuring of thiol-ene polymers to create cross-linked and mechanically robust nanostructures that are subsequently functionalized with proteins.
== Simulations ==
Computer simulations have long been important counterparts to experimental studies of NEMS devices. Through continuum mechanics and molecular dynamics (MD), important behaviors of NEMS devices can be predicted via computational modeling before engaging in experiments. Additionally, combining continuum and MD techniques enables engineers to efficiently analyze the stability of NEMS devices without resorting to ultra-fine meshes and time-intensive simulations. Simulations have other advantages as well: they do not require the time and expertise associated with fabricating NEMS devices; they can effectively predict the interrelated roles of various electromechanical effects; and parametric studies can be conducted fairly readily as compared with experimental approaches. For example, computational studies have predicted the charge distributions and “pull-in” electromechanical responses of NEMS devices. Using simulations to predict mechanical and electrical behavior of these devices can help optimize NEMS device design parameters.
== Reliability and Life Cycle of NEMS ==
=== Reliability and Challenges ===
Reliability provides a quantitative measure of the component's integrity and performance without failure for a specified product lifetime. Failure of NEMS devices can be attributed to a variety of sources, such as mechanical, electrical, chemical, and thermal factors. Identifying failure mechanisms, improving yield, scarcity of information, and reproducibility issues have been identified as major challenges to achieving higher levels of reliability for NEMS devices. Such challenges arise during both manufacturing stages (i.e. wafer processing, packaging, final assembly) and post-manufacturing stages (i.e. transportation, logistics, usage).
==== Packaging ====
Packaging challenges often account for 75–95% of the overall costs of MEMS and NEMS. Factors of wafer dicing, device thickness, sequence of final release, thermal expansion, mechanical stress isolation, power and heat dissipation, creep minimization, media isolation, and protective coatings are considered by packaging design to align with the design of the MEMS or NEMS component. Delamination analysis, motion analysis, and life-time testing have been used to assess wafer-level encapsulation techniques, such as cap to wafer, wafer to wafer, and thin film encapsulation. Wafer-level encapsulation techniques can lead to improved reliability and increased yield for both micro and nanodevices.
==== Manufacturing ====
Assessing the reliability of NEMS in the early stages of the manufacturing process is essential for yield improvement. Surface forces, such as adhesion and electrostatic forces, are largely dependent on surface topography and contact geometry. Selective manufacturing of nano-textured surfaces reduces contact area, improving both adhesion and friction performance for NEMS. Furthermore, the implementation of nanoposts on engineered surfaces increases hydrophobicity, leading to a reduction in both adhesion and friction.
Adhesion and friction can also be manipulated by nanopatterning to adjust surface roughness for the appropriate applications of the NEMS device. Researchers from Ohio State University used atomic/friction force microscopy (AFM/FFM) to examine the effects of nanopatterning on hydrophobicity, adhesion, and friction for hydrophilic polymers with two types of patterned asperities (low aspect ratio and high aspect ratio). Roughness was found to be inversely correlated with these properties on hydrophilic surfaces and directly correlated on hydrophobic surfaces.
Due to the large surface-area-to-volume ratio and sensitivity of NEMS devices, adhesion and friction can impede their performance and reliability. These tribological issues arise from the natural down-scaling of these tools; however, the system can be optimized through the manipulation of the structural material, surface films, and lubricant. In comparison to undoped Si or polysilicon films, SiC films possess the lowest frictional output, resulting in increased scratch resistance and enhanced functionality at high temperatures. Hard diamond-like carbon (DLC) coatings exhibit low friction, high hardness, and wear resistance, in addition to chemical and electrical resistance. Roughness reduces wetting and increases hydrophobicity, so increasing the contact angle reduces wetting and allows for low adhesion and limited interaction between the device and its environment.
Material properties are size-dependent. Therefore, analyzing the unique characteristics of NEMS and nano-scale materials becomes increasingly important for retaining the reliability and long-term stability of NEMS devices. Some mechanical properties of nano-materials, such as hardness and elastic modulus, are determined by using a nanoindenter and bend tests on a material that has undergone fabrication processes. These measurements, however, do not consider how the device will operate in industry under prolonged or cyclic stresses and strains. The theta structure is a NEMS model that exhibits unique mechanical properties. Composed of Si, the structure has high strength and is able to concentrate stresses at the nanoscale to measure certain mechanical properties of materials.
==== Residual stresses ====
To increase reliability of structural integrity, characterization of both material structure and intrinsic stresses at appropriate length scales becomes increasingly pertinent. Effects of residual stresses include but are not limited to fracture, deformation, delamination, and nanosized structural changes, which can result in failure of operation and physical deterioration of the device.
Residual stresses can influence electrical and optical properties. For instance, in various photovoltaic and light emitting diodes (LED) applications, the band gap energy of semiconductors can be tuned accordingly by the effects of residual stress.
Atomic force microscopy (AFM) and Raman spectroscopy can be used to characterize the distribution of residual stresses on thin films in terms of force volume imaging, topography, and force curves. Furthermore, residual stress can be used to measure nanostructures’ melting temperature by using differential scanning calorimetry (DSC) and temperature dependent X-ray Diffraction (XRD).
== Future ==
Key hurdles currently preventing the commercial application of many NEMS devices include low yields and high device quality variability. Before NEMS devices can actually be implemented, reasonable integrations of carbon-based products must be created. A recent step in that direction has been demonstrated for diamond, achieving a processing level comparable to that of silicon. The focus is currently shifting from experimental work towards practical applications and device structures that will implement and profit from such novel devices. The next challenge to overcome involves understanding all of the properties of these carbon-based tools, and using the properties to make efficient and durable NEMS with low failure rates.
Carbon-based materials have served as prime materials for NEMS use, because of their exceptional mechanical and electrical properties.
Recently, nanowires of chalcogenide glasses have been shown to be a key platform for designing tunable NEMS owing to the availability of active modulation of Young's modulus.
The global market of NEMS is projected to reach $108.88 million by 2022.
== Applications ==
Nanoelectromechanical relay
Nanoelectromechanical systems mass spectrometer
=== Nanoelectromechanical-based cantilevers ===
Researchers from the California Institute of Technology developed a NEMS-based cantilever with mechanical resonances up to very high frequencies (VHF). Its incorporation of electronic displacement transducers based on piezoresistive thin metal films facilitates unambiguous and efficient nanodevice readout. Functionalizing the device's surface with a thin polymer coating that has a high partition coefficient for the targeted species enables NEMS-based cantilevers to provide chemisorption measurements at room temperature with mass resolution of less than one attogram. Further capabilities of NEMS-based cantilevers have been exploited for applications in sensors, scanning probes, and devices operating at very high frequency (100 MHz).
== References ==
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. For any non-deterministic finite-state machine, an equivalent deterministic one can be constructed.
The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are: vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; combination locks, which require the input of a sequence of numbers in the proper order.
The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. FSMs are studied in the more general field of automata theory.
== Example: coin-operated turnstile ==
An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry, preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted.
Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two possible inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; that is, giving additional coin inputs does not change the state. A customer pushing through the arms gives a push input and resets the state to Locked.
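The two-state turnstile described above maps directly onto a transition table in code. The following minimal sketch uses the state and input names from the description; the dictionary-based encoding is one common implementation choice, not the only one:

```python
# Transition table for the coin-operated turnstile:
# (state, input) -> next state. Pairs not listed leave the state unchanged,
# matching the behavior described above (push in Locked, coin in Unlocked).
TRANSITIONS = {
    ("Locked", "coin"): "Unlocked",
    ("Unlocked", "push"): "Locked",
}

def run(inputs, state="Locked"):
    """Feed a sequence of inputs to the turnstile; return the final state."""
    for event in inputs:
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["push", "coin", "push"]))  # Locked: one customer passed through
print(run(["coin", "coin"]))          # Unlocked: extra coins have no effect
```

Because the machine is deterministic and the input alphabet is small, the entire behavior fits in two dictionary entries plus the default "stay put" rule.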
The turnstile state machine can be represented by a state-transition table, showing for each possible state, the transitions between them (based upon the inputs given to the machine) and the outputs resulting from each input:
The turnstile state machine can also be represented by a directed graph called a state diagram (above). Each state is represented by a node (circle). Edges (arrows) show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state (such as a coin input in the Unlocked state) is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.
== Concepts and terminology ==
A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received.
For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state.
In some finite-state machine representations, it is also possible to associate actions with a state:
an entry action: performed when entering the state, and
an exit action: performed when exiting the state.
== Representations ==
=== State/Event table ===
Several state-transition table types are used. The most common representation is shown below: the combination of current state (e.g. B) and input (e.g. Y) shows the next state (e.g. C). By itself, the table cannot completely describe the action, so it is common to use footnotes. Other related representations may not have this limitation. For example, an FSM definition including the full action's information is possible using state tables (see also virtual finite-state machine).
=== UML state machines ===
The Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite-state machines while retaining their main benefits. UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.
=== SDL state machines ===
The Specification and Description Language is a standard from ITU that includes graphical symbols to describe actions in the transition:
send an event
receive an event
start a timer
cancel a timer
start another concurrent state machine
decision
SDL embeds basic data types called "Abstract Data Types", an action language, and an execution semantic in order to make the finite-state machine executable.
=== Other state diagrams ===
There are a large number of variants to represent an FSM such as the one in figure 3.
== Usage ==
In addition to their use in modeling reactive systems presented here, finite-state machines are significant in many different areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, video game programming, and logic. Finite-state machines are a class of automata studied in automata theory and the theory of computation.
In computer science, finite-state machines are widely used in modeling of application behavior (control theory), design of hardware digital systems, software engineering, compilers, network protocols, and computational linguistics.
== Classification ==
Finite-state machines can be subdivided into acceptors, classifiers, transducers and sequencers.
=== Acceptors ===
Acceptors (also called detectors or recognizers) produce binary output, indicating whether or not the received input is accepted. Each state of an acceptor is either accepting or non-accepting. Once all input has been received, if the current state is an accepting state, the input is accepted; otherwise it is rejected. As a rule, input is a sequence of symbols (characters); actions are not used. The start state can also be an accepting state, in which case the acceptor accepts the empty string. The example in figure 4 shows an acceptor that accepts the string "nice". In this acceptor, the only accepting state is state 7.
A (possibly infinite) set of symbol sequences, called a formal language, is a regular language if there is some acceptor that accepts exactly that set. For example, the set of binary strings with an even number of zeroes is a regular language (cf. Fig. 5), while the set of all strings whose length is a prime number is not.
An acceptor could also be described as defining a language that would contain every string accepted by the acceptor but none of the rejected ones; that language is accepted by the acceptor. By definition, the languages accepted by acceptors are the regular languages.
The problem of determining the language accepted by a given acceptor is an instance of the algebraic path problem—itself a generalization of the shortest path problem to graphs with edges weighted by the elements of an (arbitrary) semiring.
An example of an accepting state appears in Fig. 5: a deterministic finite automaton (DFA) that detects whether the binary input string contains an even number of 0s.
S1 (which is also the start state) indicates the state at which an even number of 0s has been input. S1 is therefore an accepting state. This acceptor will finish in an accept state, if the binary string contains an even number of 0s (including any binary string containing no 0s). Examples of strings accepted by this acceptor are ε (the empty string), 1, 11, 11..., 00, 010, 1010, 10110, etc.
=== Classifiers ===
Classifiers are a generalization of acceptors that produce n-ary output where n is strictly greater than two.
=== Transducers ===
Transducers produce output based on a given input and/or a state using actions. They are used for control applications and in the field of computational linguistics.
In control applications, two types are distinguished:
Moore machine
The FSM uses only entry actions, i.e., output depends only on state. The advantage of the Moore model is a simplification of the behaviour. Consider an elevator door. The state machine recognizes two commands: "command_open" and "command_close", which trigger state changes. The entry action (E:) in state "Opening" starts a motor opening the door, the entry action in state "Closing" starts a motor in the other direction closing the door. States "Opened" and "Closed" stop the motor when fully opened or closed. They signal to the outside world (e.g., to other state machines) the situation: "door is open" or "door is closed".
Mealy machine
The FSM also uses input actions, i.e., output depends on input and state. The use of a Mealy FSM leads often to a reduction of the number of states. The example in figure 7 shows a Mealy FSM implementing the same behaviour as in the Moore example (the behaviour depends on the implemented FSM execution model and will work, e.g., for virtual FSM but not for event-driven FSM). There are two input actions (I:): "start motor to close the door if command_close arrives" and "start motor in the other direction to open the door if command_open arrives". The "opening" and "closing" intermediate states are not shown.
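The contrast between the two transducer styles can be sketched on the elevator-door example. This is an illustrative sketch only; the state names and output strings are assumptions based on the description above, with the Moore output keyed to the state alone and the Mealy output chosen at the transition:

```python
# Moore style: the output is an entry action that depends only on the state.
MOORE_OUTPUT = {
    "Opening": "start motor (open direction)",
    "Closing": "start motor (close direction)",
    "Opened": "stop motor, signal door is open",
    "Closed": "stop motor, signal door is closed",
}

# Mealy style: the output depends on both the current state and the input,
# so the intermediate Opening/Closing states are not needed.
def mealy_output(state, command):
    if state == "Closed" and command == "command_open":
        return "start motor (open direction)"
    if state == "Opened" and command == "command_close":
        return "start motor (close direction)"
    return "no action"
```

The Mealy version needs fewer states precisely because the action is attached to the transition rather than to a dedicated state that exists only to emit it.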
=== Sequencers ===
Sequencers (also called generators) are a subclass of acceptors and transducers that have a single-letter input alphabet. They produce only one sequence, which can be seen as an output sequence of acceptor or transducer outputs.
=== Determinism ===
A further distinction is between deterministic (DFA) and non-deterministic (NFA, GNFA) automata. In a deterministic automaton, every state has exactly one transition for each possible input. In a non-deterministic automaton, an input can lead to one, more than one, or no transition for a given state. The powerset construction algorithm can transform any nondeterministic automaton into a (usually more complex) deterministic automaton with identical functionality.
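The powerset construction can be sketched compactly: each DFA state is a frozenset of NFA states, and new sets are explored until none remain. Epsilon transitions are omitted here for simplicity (a real NFA converter would also need epsilon-closure); the example NFA is a hypothetical one accepting strings ending in "ab":

```python
# Subset (powerset) construction: turn an NFA transition function
# (state, symbol) -> set of states into an equivalent DFA whose states
# are frozensets of NFA states.
def powerset_construction(nfa_delta, start, alphabet):
    dfa_delta = {}
    start_set = frozenset([start])
    todo, seen = [start_set], {start_set}
    while todo:
        current = todo.pop()
        for sym in alphabet:
            # The DFA successor is the union of all NFA successors.
            nxt = frozenset(s for q in current
                            for s in nfa_delta.get((q, sym), set()))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return dfa_delta, start_set, seen

# NFA accepting strings over {a, b} that end in "ab":
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
delta, s0, states = powerset_construction(nfa, 0, "ab")
print(len(states))  # 3 reachable DFA states: {0}, {0,1}, {0,2}
```

Note the "usually more complex" caveat in the text: in the worst case the DFA has exponentially many states, though here only three subsets are reachable.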
A finite-state machine with only one state is called a "combinatorial FSM". It only allows actions upon transition into a state. This concept is useful in cases where a number of finite-state machines are required to work together, and when it is convenient to consider a purely combinatorial part as a form of FSM to suit the design tools.
== Alternative semantics ==
There are other sets of semantics available to represent state machines. For example, there are tools for modeling and designing logic for embedded controllers. They combine hierarchical state machines (which usually have more than one current state), flow graphs, and truth tables into one language, resulting in a different formalism and set of semantics. These charts, like Harel's original state machines, support hierarchically nested states, orthogonal regions, state actions, and transition actions.
== Mathematical model ==
In accordance with the general classification, the following formal definitions are found.
A deterministic finite-state machine or deterministic finite-state acceptor is a quintuple (Σ, S, s₀, δ, F), where:
Σ is the input alphabet (a finite non-empty set of symbols);
S is a finite non-empty set of states;
s₀ is an initial state, an element of S;
δ is the state-transition function: δ : S × Σ → S (in a nondeterministic finite automaton it would be δ : S × Σ → P(S), i.e. δ would return a set of states);
F is the set of final states, a (possibly empty) subset of S.
For both deterministic and non-deterministic FSMs, it is conventional to allow δ to be a partial function, i.e. δ(s, x) does not have to be defined for every combination of s ∈ S and x ∈ Σ. If an FSM M is in a state s, the next symbol is x, and δ(s, x) is not defined, then M can announce an error (i.e. reject the input). This is useful in definitions of general state machines, but less useful when transforming the machine. Some algorithms in their default form may require total functions.
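A partial transition function is easy to model in code: an undefined (state, symbol) pair makes the machine announce an error (reject) immediately. The acceptor below is a hypothetical example for strings of the form a*b:

```python
# Running an FSM whose transition function delta is partial: a missing
# (state, symbol) entry means delta(s, x) is undefined, so the machine
# rejects the input on the spot.
def run_partial(delta, start, finals, word):
    state = start
    for sym in word:
        if (state, sym) not in delta:
            return False  # delta(s, x) undefined: announce an error
        state = delta[(state, sym)]
    return state in finals

# Acceptor for the language a*b: any number of a's followed by a single b.
delta = {("q0", "a"): "q0", ("q0", "b"): "q1"}
assert run_partial(delta, "q0", {"q1"}, "aab")        # accepted
assert not run_partial(delta, "q0", {"q1"}, "aba")    # ("q1","a") undefined
```

Making δ total would require adding an explicit dead state that absorbs every otherwise-undefined pair, which is what algorithms requiring total functions effectively do.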
A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. That is, each formal language accepted by a finite-state machine is accepted by such a kind of restricted Turing machine, and vice versa.
A finite-state transducer is a sextuple (Σ, Γ, S, s₀, δ, ω), where:
Σ is the input alphabet (a finite non-empty set of symbols);
Γ is the output alphabet (a finite non-empty set of symbols);
S is a finite non-empty set of states;
s₀ is the initial state, an element of S;
δ is the state-transition function: δ : S × Σ → S;
ω is the output function.
If the output function depends on the state and input symbol (ω : S × Σ → Γ), that definition corresponds to the Mealy model and can be modelled as a Mealy machine. If the output function depends only on the state (ω : S → Γ), that definition corresponds to the Moore model and can be modelled as a Moore machine. A finite-state machine with no output function at all is known as a semiautomaton or transition system.
If we disregard the first output symbol of a Moore machine, ω(s₀), then it can be readily converted to an output-equivalent Mealy machine by setting the output function of every Mealy transition (i.e. labeling every edge) with the output symbol of the destination Moore state. The converse transformation is less straightforward, because a Mealy machine state may have different output labels on its incoming transitions (edges). Every such state needs to be split into multiple Moore machine states, one for every incident output symbol.
== Optimization ==
Optimizing an FSM means finding a machine with the minimum number of states that performs the same function. The fastest known algorithm doing this is the Hopcroft minimization algorithm. Other techniques include using an implication table, or the Moore reduction procedure. Additionally, acyclic FSAs can be minimized in linear time.
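The Moore reduction procedure mentioned above can be sketched as iterative partition refinement: start from the accepting/non-accepting split and keep splitting blocks whose members transition into different blocks. Hopcroft's algorithm is asymptotically faster; this version, with a hypothetical three-state DFA as input, favors clarity and assumes a total transition function:

```python
# Minimize a DFA by partition refinement (Moore reduction procedure).
# Returns the final partition of states into equivalence classes.
def minimize(states, alphabet, delta, finals):
    partition = [p for p in (set(finals), set(states) - set(finals)) if p]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by which block each input symbol sends them to.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition)
                         if delta[(s, a)] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# Example DFA where states B and C behave identically:
DELTA = {("A", "0"): "B", ("A", "1"): "A",
         ("B", "0"): "A", ("B", "1"): "C",
         ("C", "0"): "A", ("C", "1"): "C"}
blocks = minimize(["A", "B", "C"], "01", DELTA, {"A"})
print(len(blocks))  # 2: B and C merge into one equivalence class
```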
== Implementation ==
=== Hardware applications ===
In a digital circuit, an FSM may be built using a programmable logic device, a programmable logic controller, logic gates and flip flops or relays. More specifically, a hardware implementation requires a register to store state variables, a block of combinational logic that determines the state transition, and a second block of combinational logic that determines the output of an FSM. One of the classic hardware implementations is the Richards controller.
In a Medvedev machine, the output is directly connected to the state flip-flops minimizing the time delay between flip-flops and output.
Through low-power state encoding, state machines may be optimized to minimize power consumption.
=== Software applications ===
The following concepts are commonly used to build software applications with finite-state machines:
Automata-based programming
Event-driven finite-state machine
Virtual finite-state machine
State design pattern
=== Finite-state machines and compilers ===
Finite automata are often used in the frontend of programming language compilers. Such a frontend may comprise several finite-state machines that implement a lexical analyzer and a parser.
Starting from a sequence of characters, the lexical analyzer builds a sequence of language tokens (such as reserved words, literals, and identifiers) from which the parser builds a syntax tree. The lexical analyzer and the parser handle the regular and context-free parts of the programming language's grammar.
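The lexical-analyzer role described above can be sketched as a small state machine that walks the character stream and emits a token whenever a run ends. This toy sketch handles only whitespace-separated identifiers and numbers; real lexers cover far more token classes and re-examine the terminating character:

```python
# A toy FSM-driven lexical analyzer: splits whitespace-separated input
# into IDENT and NUMBER tokens. States: start, ident, number.
def tokenize(text):
    tokens, state, buf = [], "start", ""
    for ch in text + " ":              # trailing sentinel flushes the last token
        if state == "start":
            if ch.isalpha():
                state, buf = "ident", ch
            elif ch.isdigit():
                state, buf = "number", ch
        elif state == "ident":
            if ch.isalnum():
                buf += ch              # stay in ident on letters/digits
            else:
                tokens.append(("IDENT", buf))
                state = "start"
        elif state == "number":
            if ch.isdigit():
                buf += ch              # stay in number on digits
            else:
                tokens.append(("NUMBER", buf))
                state = "start"
    return tokens

print(tokenize("x1 42 foo"))
# [('IDENT', 'x1'), ('NUMBER', '42'), ('IDENT', 'foo')]
```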
== See also ==
== References ==
== Sources ==
Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Addison-Wesley. ISBN 0-201-02988-X.
Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2006) [1979]. Introduction to Automata Theory, Languages, and Computation (3rd ed.). Addison-Wesley. ISBN 0-321-45536-3.
== Further reading ==
=== General ===
Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge University Press. ISBN 978-0-521-84425-3. Zbl 1188.68177.
Wagner, F., "Modeling Software with Finite State Machines: A Practical Approach", Auerbach Publications, 2006, ISBN 0-8493-8086-3.
ITU-T, Recommendation Z.100 Specification and Description Language (SDL)
Samek, M., Practical Statecharts in C/C++, CMP Books, 2002, ISBN 1-57820-110-1.
Samek, M., Practical UML Statecharts in C/C++, 2nd Edition, Newnes, 2008, ISBN 0-7506-8706-1.
Gardner, T., Advanced State Management Archived 2008-11-19 at the Wayback Machine, 2007
Cassandras, C., Lafortune, S., "Introduction to Discrete Event Systems". Kluwer, 1999, ISBN 0-7923-8609-4.
Timothy Kam, Synthesis of Finite State Machines: Functional Optimization. Kluwer Academic Publishers, Boston 1997, ISBN 0-7923-9842-4
Tiziano Villa, Synthesis of Finite State Machines: Logic Optimization. Kluwer Academic Publishers, Boston 1997, ISBN 0-7923-9892-0
Carroll, J., Long, D., Theory of Finite Automata with an Introduction to Formal Languages. Prentice Hall, Englewood Cliffs, 1989.
Kohavi, Z., Switching and Finite Automata Theory. McGraw-Hill, 1978.
Gill, A., Introduction to the Theory of Finite-state Machines. McGraw-Hill, 1962.
Ginsburg, S., An Introduction to Mathematical Machine Theory. Addison-Wesley, 1962.
=== Finite-state machines (automata theory) in theoretical computer science ===
Arbib, Michael A. (1969). Theories of Abstract Automata (1st ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. ISBN 978-0-13-913368-8.
Bobrow, Leonard S.; Arbib, Michael A. (1974). Discrete Mathematics: Applied Algebra for Computer and Information Science (1st ed.). Philadelphia: W. B. Saunders Company, Inc. ISBN 978-0-7216-1768-8.
Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924.
Boolos, George; Jeffrey, Richard (1999) [1989]. Computability and Logic (3rd ed.). Cambridge, England: Cambridge University Press. ISBN 978-0-521-20402-6.
Brookshear, J. Glenn (1989). Theory of Computation: Formal Languages, Automata, and Complexity. Redwood City, California: Benjamin/Cummings Publish Company, Inc. ISBN 978-0-8053-0143-4.
Davis, Martin; Sigal, Ron; Weyuker, Elaine J. (1994). Computability, Complexity, and Languages and Logic: Fundamentals of Theoretical Computer Science (2nd ed.). San Diego: Academic Press, Harcourt, Brace & Company. ISBN 978-0-12-206382-4.
Hopkin, David; Moss, Barbara (1976). Automata. New York: Elsevier North-Holland. ISBN 978-0-444-00249-5.
Kozen, Dexter C. (1997). Automata and Computability (1st ed.). New York: Springer-Verlag. ISBN 978-0-387-94907-9.
Lewis, Harry R.; Papadimitriou, Christos H. (1998). Elements of the Theory of Computation (2nd ed.). Upper Saddle River, New Jersey: Prentice-Hall. ISBN 978-0-13-262478-7.
Linz, Peter (2006). Formal Languages and Automata (4th ed.). Sudbury, MA: Jones and Bartlett. ISBN 978-0-7637-3798-6.
Minsky, Marvin (1967). Computation: Finite and Infinite Machines (1st ed.). New Jersey: Prentice-Hall.
Papadimitriou, Christos (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 978-0-201-53082-7.
Pippenger, Nicholas (1997). Theories of Computability (1st ed.). Cambridge, England: Cambridge University Press. ISBN 978-0-521-55380-3.
Rodger, Susan; Finley, Thomas (2006). JFLAP: An Interactive Formal Languages and Automata Package (1st ed.). Sudbury, MA: Jones and Bartlett. ISBN 978-0-7637-3834-1.
Sipser, Michael (2006). Introduction to the Theory of Computation (2nd ed.). Boston Mass: Thomson Course Technology. ISBN 978-0-534-95097-2.
Wood, Derick (1987). Theory of Computation (1st ed.). New York: Harper & Row, Publishers, Inc. ISBN 978-0-06-047208-5.
=== Abstract state machines in theoretical computer science ===
Gurevich, Yuri (July 2000). "Sequential Abstract State Machines Capture Sequential Algorithms" (PDF). ACM Transactions on Computational Logic. 1 (1): 77–111. CiteSeerX 10.1.1.146.3017. doi:10.1145/343369.343384. S2CID 2031696.
=== Machine learning using finite-state algorithms ===
Mitchell, Tom M. (1997). Machine Learning (1st ed.). New York: WCB/McGraw-Hill Corporation. ISBN 978-0-07-042807-2.
=== Hardware engineering: state minimization and synthesis of sequential circuits ===
Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924.
Booth, Taylor L. (1971). Digital Networks and Computer Systems (1st ed.). New York: John Wiley and Sons, Inc. ISBN 978-0-471-08840-0.
McCluskey, E. J. (1965). Introduction to the Theory of Switching Circuits (1st ed.). New York: McGraw-Hill Book Company, Inc. Library of Congress Card Catalog Number 65-17394.
Hill, Fredrick J.; Peterson, Gerald R. (1965). Introduction to the Theory of Switching Circuits (1st ed.). New York: McGraw-Hill Book Company. Library of Congress Card Catalog Number 65-17394.
=== Finite Markov chain processes ===
"We may think of a Markov chain as a process that moves successively through a set of states s1, s2, …, sr. … if it is in state si it moves on the next step to state sj with probability pij. These probabilities can be exhibited in the form of a transition matrix" (Kemeny (1959), p. 384)
Finite Markov-chain processes are also known as subshifts of finite type.
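The transition-matrix description in the quotation above can be sketched in a few lines of Python; the three-state chain and its probabilities here are illustrative assumptions, not taken from Kemeny:

```python
import random

# A toy transition matrix: P[i][j] = probability of moving from
# state s_i to state s_j on the next step (each row must sum to 1).
# States and probabilities are made-up illustrative values.
P = [
    [0.5, 0.25, 0.25],
    [0.2, 0.6,  0.2],
    [0.3, 0.3,  0.4],
]

def step(state, rng=random):
    """Move one step: choose the next state j with probability P[state][j]."""
    r = rng.random()
    cumulative = 0.0
    for j, p in enumerate(P[state]):
        cumulative += p
        if r < cumulative:
            return j
    return len(P) - 1  # guard against floating-point round-off

def run(start, n_steps, seed=0):
    """Simulate n_steps moves of the chain from a starting state."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        state = step(state, rng)
        path.append(state)
    return path
```

A fixed seed makes a run reproducible, which is convenient when checking that empirical visit frequencies approach the chain's stationary distribution.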
Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924.
Kemeny, John G.; Mirkil, Hazleton; Snell, J. Laurie; Thompson, Gerald L. (1959). Finite Mathematical Structures (1st ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. Library of Congress Card Catalog Number 59-12841. Chapter 6 "Finite Markov Chains".
== External links ==
Modeling a Simple AI behavior using a Finite State Machine Example of usage in Video Games
Free On-Line Dictionary of Computing description of Finite-State Machines
NIST Dictionary of Algorithms and Data Structures description of Finite-State Machines
A brief overview of state machine types, comparing theoretical aspects of Mealy, Moore, Harel & UML state machines.
A phonograph, later called a gramophone, and since the 1940s a record player, or more recently a turntable, is a device for the mechanical and analogue reproduction of sound. The sound vibration waveforms are recorded as corresponding physical deviations of a helical or spiral groove engraved, etched, incised, or impressed into the surface of a rotating cylinder or disc, called a record. To recreate the sound, the surface is similarly rotated while a playback stylus traces the groove and is therefore vibrated by it, faintly reproducing the recorded sound. In early acoustic phonographs, the stylus vibrated a diaphragm that produced sound waves coupled to the open air through a flaring horn, or directly to the listener's ears through stethoscope-type earphones.
The phonograph was invented in 1877 by Thomas Edison; it came into wider use the following year. Alexander Graham Bell's Volta Laboratory made several improvements in the 1880s and introduced the graphophone, whose innovations included wax-coated cardboard cylinders and a cutting stylus that moved from side to side in a zigzag groove around the record. In the 1890s, Emile Berliner initiated the transition from phonograph cylinders to flat discs with a spiral groove running from the periphery to near the centre, coining the term gramophone for disc record players, which is predominantly used in many languages. Later improvements through the years included modifications to the turntable and its drive system, stylus, pickup system, and the sound and equalization systems.
The disc phonograph record was the dominant commercial audio distribution format throughout most of the 20th century, and phonographs became the first example of home audio that people owned and used at their residences. In the 1960s, the use of 8-track cartridges and cassette tapes were introduced as alternatives. By the late 1980s, phonograph use had declined sharply due to the popularity of cassettes and the rise of the compact disc. However, records have undergone a revival since the late 2000s.
== Terminology ==
The terminology used to describe record-playing devices is not uniform across the English-speaking world. In modern contexts, the playback device is often referred to as a "turntable", "record player", or "record changer". Each of these terms denotes distinct items. When integrated into a DJ setup with a mixer, turntables are colloquially known as "decks". In later versions of electric phonographs, commonly known since the 1940s as record players or turntables, the movements of the stylus are transformed into an electrical signal by a transducer. This signal is then converted back into sound through an amplifier and one or more loudspeakers.
The term "phonograph", meaning "sound writing", originates from the Greek words φωνή (phonē, meaning 'sound' or 'voice') and γραφή (graphē, meaning 'writing'). Similarly, the terms "gramophone" and "graphophone" have roots in the Greek words γράμμα (gramma, meaning 'letter') and φωνή (phōnē, meaning 'voice').
In British English, "gramophone" may refer to any sound-reproducing machine that utilizes disc records. These were introduced and popularized in the UK by the Gramophone Company. Initially, "gramophone" was a proprietary trademark of the company, and any use of the name by competing disc record manufacturers was rigorously challenged in court. However, in 1910, an English court ruled that the term had become generic.
=== United States ===
In American English, "phonograph", properly specific to machines made by Edison, was sometimes used in a generic sense as early as the 1890s to include cylinder-playing machines made by others. But it was then considered strictly incorrect to apply it to Emile Berliner's Gramophone, a different machine that played nonrecordable discs (although Edison's original Phonograph patent included the use of discs).
=== Australia ===
In Australian English, "record player" was the term; "turntable" was a more technical term; "gramophone" was restricted to the old mechanical (i.e., wind-up) players; and "phonograph" was used as in British English. The phonograph was first demonstrated in Australia on 14 June 1878 to a meeting of the Royal Society of Victoria by the Society's Honorary Secretary, Alex Sutherland, who published "The Sounds of the Consonants, as Indicated by the Phonograph" in the Society's journal in November that year. On 8 August 1878 the phonograph was publicly demonstrated at the Society's annual conversazione, along with a range of other new inventions, including the microphone.
== Early history ==
=== Phonautograph ===
The phonautograph was invented on March 25, 1857, by Frenchman Édouard-Léon Scott de Martinville, an editor and typographer of manuscripts at a scientific publishing house in Paris. One day while editing Professor Longet's Traité de Physiologie, he happened upon an engraved illustration of the anatomy of the human ear, and conceived of "the imprudent idea of photographing the word." In 1853 or 1854 (Scott cited both years) he began working on "le problème de la parole s'écrivant elle-même" ("the problem of speech writing itself"), aiming to build a device that could replicate the function of the human ear.
Scott coated a plate of glass with a thin layer of lampblack. He then took an acoustic trumpet, and at its tapered end affixed a thin membrane that served as the analog to the eardrum. At the center of that membrane, he attached a rigid boar's bristle approximately a centimetre long, placed so that it just grazed the lampblack. As the glass plate was slid horizontally in a well-formed groove at a speed of one meter per second, a person would speak into the trumpet, causing the membrane to vibrate and the stylus to trace figures that were scratched into the lampblack. On March 25, 1857, Scott received the French patent #17,897/31,470 for his device, which he called a phonautograph. The earliest known surviving recording of a human voice was made on April 9, 1860, when Scott recorded someone singing the song "Au Clair de la Lune" ("By the Light of the Moon") on the device. However, the device was not designed to play back sounds, as Scott intended for people to read back the tracings, which he called phonautograms. This was not the first time someone had used a device to create direct tracings of the vibrations of sound-producing objects, as tuning forks had been used in this way by English physicist Thomas Young in 1807. By late 1857, with support from the Société d'encouragement pour l'industrie nationale, Scott's phonautograph was recording sounds with sufficient precision to be adopted by the scientific community, paving the way for the nascent science of acoustics.
The device's true significance in the history of recorded sound was not fully realized prior to March 2008, when it was discovered and resurrected in a Paris patent office by First Sounds, an informal collaborative of American audio historians, recording engineers, and sound archivists founded to make the earliest sound recordings available to the public. The phonautograms were then digitally converted by scientists at the Lawrence Berkeley National Laboratory in California, who were able to play back the recorded sounds, something Scott had never conceived of. Prior to this point, the earliest known record of a human voice was thought to be an 1877 phonograph recording by Thomas Edison. The phonautograph would play a role in the development of the gramophone, whose inventor, Emile Berliner, worked with the phonautograph in the course of developing his own device.
=== Paleophone ===
Charles Cros, a French poet and amateur scientist, is the first person known to have made the conceptual leap from recording sound as a traced line to the theoretical possibility of reproducing the sound from the tracing and then to devising a definite method for accomplishing the reproduction. On April 30, 1877, he deposited a sealed envelope containing a summary of his ideas with the French Academy of Sciences, a standard procedure used by scientists and inventors to establish priority of conception of unpublished ideas in the event of any later dispute.
An account of his invention was published on October 10, 1877, by which date Cros had devised a more direct procedure: the recording stylus could scribe its tracing through a thin coating of acid-resistant material on a metal surface and the surface could then be etched in an acid bath, producing the desired groove without the complication of an intermediate photographic procedure. The author of this article called the device a phonographe, but Cros himself favored the word paleophone, sometimes rendered in French as voix du passé ('voice of the past').
Cros was a poet of meager means, not in a position to pay a machinist to build a working model, and largely content to bequeath his ideas to the public domain free of charge and let others reduce them to practice. But after the earliest reports of Edison's presumably independent invention crossed the Atlantic, he had his sealed letter of April 30 opened and read at the December 3, 1877 meeting of the French Academy of Sciences, claiming due scientific credit for priority of conception.
Throughout the first decade (1890–1900) of commercial production of the earliest crude disc records, the direct acid-etch method first invented by Cros was used to create the metal master discs, but Cros was not around to claim any credit or to witness the humble beginnings of the eventually rich phonographic library he had foreseen. He had died in 1888 at the age of 45.
=== The early phonographs ===
Thomas Edison conceived the principle of recording and reproducing sound between May and July 1877 as a byproduct of his efforts to "play back" recorded telegraph messages and to automate speech sounds for transmission by telephone. His first experiments were with waxed paper. He announced his invention of the first phonograph, a device for recording and replaying sound, on November 21, 1877 (early reports appear in Scientific American and several newspapers at the beginning of November, and an even earlier announcement of Edison working on a "talking-machine" can be found in the Chicago Daily Tribune on May 9), and he demonstrated the device for the first time on November 29 (it was patented on February 19, 1878, as US Patent 200,521). "In December, 1877, a young man came into the office of the Scientific American, and placed before the editors a small, simple machine about which few preliminary remarks were offered. The visitor without any ceremony whatever turned the crank, and to the astonishment of all present the machine said: 'Good morning. How do you do? How do you like the phonograph?' The machine thus spoke for itself, and made known the fact that it was the phonograph..."

The music critic Herman Klein attended an early demonstration (1881–82) of a similar machine. On the early phonograph's reproductive capabilities he wrote in retrospect: "It sounded to my ear like someone singing about half a mile away, or talking at the other end of a big hall; but the effect was rather pleasant, save for a peculiar nasal quality wholly due to the mechanism, although there was little of the scratching that later was a prominent feature of the flat disc. Recording for that primitive machine was a comparatively simple matter. I had to keep my mouth about six inches away from the horn and remember not to make my voice too loud if I wanted anything approximating to a clear reproduction; that was all.
When it was played over to me and I heard my own voice for the first time, one or two friends who were present said that it sounded rather like mine; others declared that they would never have recognised it. I daresay both opinions were correct."
The Argus newspaper from Melbourne, Australia, reported on an 1878 demonstration at the Royal Society of Victoria, writing "There was a large attendance of ladies and gentlemen, who appeared greatly interested in the various scientific instruments exhibited. Among these the most interesting, perhaps, was the trial made by Mr. Sutherland with the phonograph, which was most amusing. Several trials were made, and were all more or less successful. 'Rule Britannia' was distinctly repeated, but great laughter was caused by the repetition of the convivial song of 'He's a jolly good fellow,' which sounded as if it was being sung by an old man of 80 with a cracked voice."
=== Early machines ===
Edison's early phonographs recorded onto a thin sheet of metal, normally tinfoil, which was temporarily wrapped around a helically grooved cylinder mounted on a correspondingly threaded rod supported by plain and threaded bearings. While the cylinder was rotated and slowly progressed along its axis, the airborne sound vibrated a diaphragm connected to a stylus that indented the foil into the cylinder's groove, thereby recording the vibrations as "hill-and-dale" variations of the depth of the indentation.
=== Introduction of the disc record ===
By 1890, record manufacturers had begun using a rudimentary duplication process to mass-produce their product. While the live performer sang into the master phonograph, up to ten tubes led to blank cylinders in other phonographs. Until this development, each record had to be custom-made. Before long, a more advanced pantograph-based process made it possible to simultaneously produce 90–150 copies of each record. However, as demand for certain records grew, popular artists still needed to re-record their songs again and again. Reportedly, the medium's first major African-American star, George Washington Johnson, was obliged to perform his "The Laughing Song" (or the separate "The Whistling Coon") thousands of times in a studio during his recording career. Sometimes he would sing "The Laughing Song" more than fifty times in a day, at twenty cents per rendition. (The average price of a single cylinder in the mid-1890s was about fifty cents.)
=== Oldest surviving recordings ===
Frank Lambert's lead cylinder recording for an experimental talking clock is often identified as the oldest surviving playable sound recording, although the evidence advanced for its early date is controversial.
Wax phonograph cylinder recordings of Handel's choral music made on June 29, 1888, at The Crystal Palace in London were thought to be the oldest-known surviving musical recordings, until the recent playback by a group of American historians of a phonautograph recording of Au clair de la lune recorded on April 9, 1860.
The 1860 phonautogram had not until then been played, as it was only a transcription of sound waves into graphic form on paper for visual study. Recently developed optical scanning and image processing techniques have given new life to early recordings by making it possible to play unusually delicate or physically unplayable media without physical contact.
A recording made on a sheet of tinfoil at an 1878 demonstration of Edison's phonograph in St. Louis, Missouri, has been played back by optical scanning and digital analysis. A few other early tinfoil recordings are known to survive, including a slightly earlier one that is believed to preserve the voice of U.S. President Rutherford B. Hayes, but as of May 2014 they have not yet been scanned. These antique tinfoil recordings, which have typically been stored folded, are too fragile to be played back with a stylus without seriously damaging them. Edison's 1877 tinfoil recording of Mary Had a Little Lamb, not preserved, has been called the first instance of recorded verse.
On the occasion of the 50th anniversary of the phonograph, Edison recounted reciting Mary Had a Little Lamb to test his first machine. The 1927 event was filmed by an early sound-on-film newsreel camera, and an audio clip from that film's soundtrack is sometimes mistakenly presented as the original 1877 recording.
Wax cylinder recordings made by 19th-century media legends such as P. T. Barnum and Shakespearean actor Edwin Booth are amongst the earliest verified recordings by the famous that have survived to the present.
== Improvements at the Volta Laboratory ==
Alexander Graham Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work at Bell's Volta Laboratory in Washington, D. C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax.
Although Edison had invented the phonograph in 1877, the fame bestowed on him for this invention was not due to its efficiency. Recording with his tinfoil phonograph was too difficult to be practical, as the tinfoil tore easily, and even when the stylus was properly adjusted, its reproduction of sound was distorted and good for only a few playbacks; nevertheless, Edison had discovered the idea of sound recording. However, he did not improve it immediately after his discovery, allegedly because of an agreement to spend the next five years developing the New York City electric light and power system.
=== Volta's early challenge ===
Meanwhile, Bell, a scientist and experimenter at heart, was looking for new worlds to conquer after having patented the telephone. According to Sumner Tainter, it was through Gardiner Green Hubbard that Bell took up the phonograph challenge. Bell had married Hubbard's daughter Mabel in 1879 while Hubbard was president of the Edison Speaking Phonograph Co., and his organization, which had purchased the Edison patent, was financially troubled because people did not want to buy a machine that seldom worked well and proved difficult for the average person to operate.
=== Volta Graphophone ===
The sound vibrations had been indented in the wax that had been applied to the Edison phonograph. The following was the text of one of their recordings: "There are more things in heaven and earth, Horatio, than are dreamed of in your philosophy. I am a Graphophone and my mother was a phonograph." Most of the disc machines designed at the Volta Lab had their disc mounted on vertical turntables. The explanation is that in the early experiments, the turntable, with disc, was mounted on the shop lathe, along with the recording and reproducing heads. Later, when the complete models were built, most of them featured vertical turntables.
One interesting exception was a horizontal seven-inch turntable. The machine, although made in 1886, was a duplicate of one made earlier but taken to Europe by Chichester Bell. Tainter was granted U.S. patent 385,886 on July 10, 1888. The playing arm is rigid, except for a pivoted vertical motion of 90 degrees to allow removal of the record or a return to starting position. While recording or playing, the record not only rotated, but moved laterally under the stylus, which thus described a spiral, recording 150 grooves to the inch.
The basic distinction between Edison's first phonograph patent and the Bell and Tainter patent of 1886 was the method of recording. Edison's method was to indent the sound waves on a piece of tin foil, while Bell and Tainter's invention called for cutting, or "engraving", the sound waves into a wax record with a sharp recording stylus.
=== Graphophone commercialization ===
In 1885, when the Volta Associates were sure that they had a number of practical inventions, they filed patent applications and began to seek out investors. The Volta Graphophone Company of Alexandria, Virginia, was created on January 6, 1886, and incorporated on February 3, 1886. It was formed to control the patents and to handle the commercial development of their sound recording and reproduction inventions, one of which became the first Dictaphone.
After the Volta Associates gave several demonstrations in the City of Washington, businessmen from Philadelphia created the American Graphophone Company on March 28, 1887, in order to produce and sell the machines for the budding phonograph marketplace. The Volta Graphophone Company then merged with American Graphophone, which itself later evolved into Columbia Records.
A coin-operated version of the Graphophone, U.S. patent 506,348, was developed by Tainter in 1893 to compete with nickel-in-the-slot entertainment phonograph U.S. patent 428,750 demonstrated in 1889 by Louis T. Glass, manager of the Pacific Phonograph Company.
The work of the Volta Associates laid the foundation for the successful use of dictating machines in business, because their wax recording process was practical and their machines were durable. But it would take several more years and the renewed efforts of Edison and the further improvements of Emile Berliner and many others, before the recording industry became a major factor in home entertainment.
The technology quickly became popular abroad, where it was also used in new ways. In 1895, for example, Hungary became the first country to use phonographs to conduct folklore and ethnomusicological research, after which it became common practice in ethnography.
== Disc vs. cylinder as a recording medium ==
Discs are not inherently better than cylinders at providing audio fidelity. Rather, the advantages of the format are seen in the manufacturing process: discs can be stamped, and the matrices used to stamp discs can be shipped to other pressing plants for global distribution of recordings; cylinders could not be stamped until 1901–1902, when the gold moulding process was introduced by Edison.
Through experimentation, in 1892, Berliner began commercial production of his disc records and "gramophones". His "phonograph record" was the first disc record to be offered to the public. They were five inches (13 cm) in diameter and recorded on one side only. Seven-inch (17.5 cm) records followed in 1895. The same year, Berliner replaced the hard rubber used to make the discs with a shellac compound. Berliner's early records had poor sound quality, however. Work by Eldridge R. Johnson eventually improved the sound fidelity to a point where it was as good as the cylinder.
Wax cylinders would continue to be used into the 1920s, with New York City-based Czech immigrant, businessman, and inventor Alois Benjamin Saliger using cylinders for his "Psycho-Phone" or "Psychophone", a specialized phonograph or gramophone that Saliger intended to be used in the field of psychology. Invented in 1927 for sleep learning, the Psychophone featured a clock mounted on top of a phonograph, with a repeater device for rewinding and continuously replaying records. While Edison machines had a spring-powered motor, powered by crank on the side, Psychophone models featured an electric-powered motor. Saliger patented the device in 1932 as the "automatic time-controlled suggestion machine".
== Dominance of the disc record ==
In the 1930s, vinyl (originally known as vinylite) was introduced as a record material for radio transcription discs, and for radio commercials. At that time, virtually no discs for home use were made from this material. Vinyl was used for the popular 78-rpm V-discs issued to US soldiers during World War II. This significantly reduced breakage during transport. The first commercial vinylite record was the set of five 12" discs "Prince Igor" (Asch Records album S-800, dubbed from Soviet masters in 1945). Victor began selling some home-use vinyl 78s in late 1945; but most 78s were made of a shellac compound until the 78-rpm format was completely phased out. (Shellac records were heavier and more brittle.) 33s and 45s were, however, made exclusively of vinyl, with the exception of some 45s manufactured out of polystyrene.
=== First all-transistor phonograph ===
In 1955, Philco developed and produced the world's first all-transistor phonograph models TPA-1 and TPA-2, which were announced in the June 28, 1955 edition of The Wall Street Journal. Philco started to sell these all-transistor phonographs in the fall of 1955, for the price of $59.95. The October 1955 issue of Radio & Television News magazine (page 41) had a full-page detailed article on Philco's new consumer product. The all-transistor portable phonograph TPA-1 and TPA-2 models played only 45 rpm records and used four 1.5 volt "D" batteries for their power supply. "TPA" stands for "Transistor Phonograph Amplifier". Their circuitry used three Philco germanium PNP alloy-fused junction audio frequency transistors. After the 1956 season had ended, Philco decided to discontinue both models because transistors were too expensive compared to vacuum tubes, but by 1961 a $49.95 ($525.59 in 2023) portable, battery-powered radio-phonograph with seven transistors was available.
== Turntable designs ==
There are presently three main phonograph designs: belt-drive, direct-drive, and idler-wheel.
In a belt-drive turntable the motor is located off-center from the platter, either underneath it or entirely outside of it, and is connected to the platter or counter-platter by a drive belt made from elastomeric material.
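As a rough illustration of how such a drive sets the platter speed: the belt makes the surface speeds of the motor pulley and the platter rim equal, so the speed ratio is the inverse of the diameter ratio. The motor speed and dimensions below are made-up example values, not taken from any particular turntable:

```python
def platter_rpm(motor_rpm, pulley_diameter_mm, rim_diameter_mm):
    """Belt drive: pulley and rim surface speeds are equal,
    so platter rpm = motor rpm * (pulley diameter / rim diameter)."""
    return motor_rpm * pulley_diameter_mm / rim_diameter_mm

# Illustrative numbers only: a 300 rpm motor driving a 270 mm rim
# through a 30 mm pulley yields 33 1/3 rpm.
print(round(platter_rpm(300, 30, 270), 2))  # -> 33.33
```

In real designs the pulley is often stepped, with two diameters in this ratio's proportion to give the 33⅓ and 45 rpm speeds from one motor.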
The direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic). In 1969, Matsushita released it as the Technics SP-10, the first direct-drive turntable on the market. The most influential direct-drive turntable was the Technics SL-1200, which, following the spread of turntablism in hip hop culture, became the most widely used turntable in DJ culture for several decades.
== Arm systems ==
In some high quality equipment the arm carrying the pickup, known as a tonearm, is manufactured separately from the motor and turntable unit. Companies specialising in the manufacture of tonearms include the English company SME.
=== Cue lever ===
More sophisticated turntables were (and still are) frequently manufactured so as to incorporate a "cue lever", a device that mechanically lowers the tonearm on to the record. It enables the user to locate an individual track more easily, to pause a record, and to avoid the risk of scratching the record, which may require practice to avoid when lowering the tonearm manually.
=== Linear tracking ===
Early developments in linear turntables were from Rek-O-Kut (portable lathe/phonograph) and Ortho-Sonic in the 1950s, and Acoustical in the early 1960s. These were eclipsed by more successful implementations of the concept from the late 1960s through the early 1980s.
== Pickup systems ==
The pickup, or cartridge, is a transducer that converts mechanical vibrations from a stylus into an electrical signal. The electrical signal is amplified and converted into sound by one or more loudspeakers. Crystal and ceramic pickups that use the piezoelectric effect have largely been replaced by magnetic cartridges.
The pickup includes a stylus with a small diamond or sapphire tip that runs in the record groove. The stylus eventually becomes worn by contact with the groove, and it is usually replaceable.
Styli are classified as spherical or elliptical, although the tip is actually shaped as a half-sphere or a half-ellipsoid. Spherical styli are generally more robust than other types, but do not follow the groove as accurately, giving diminished high frequency response. Elliptical styli usually track the groove more accurately, with increased high frequency response and less distortion. For DJ use, the relative robustness of spherical styli makes them generally preferred for back-cuing and scratching. There are a number of derivations of the basic elliptical type, including the Shibata or fine line stylus, which can more accurately reproduce high frequency information contained in the record groove. This is especially important for playback of quadraphonic recordings.
=== Optical readout ===
A few specialist laser turntables read the groove optically using a laser pickup. Since there is no physical contact with the record, no wear is incurred. However, this advantage is debatable, since vinyl records have been tested to withstand even 1200 plays with no significant audio degradation, provided that they are played with a high-quality cartridge and that the surfaces are clean.
An alternative approach is to take a high-resolution photograph or scan of each side of the record and interpret the image of the grooves using computer software. An amateur attempt using a flatbed scanner lacked satisfactory fidelity. A professional system employed by the Library of Congress produces excellent quality.
== Stylus ==
A development in stylus form came about by the attention to the CD-4 quadraphonic sound modulation process, which requires up to 50 kHz frequency response, with cartridges like Technics EPC-100CMK4 capable of playback on frequencies up to 100 kHz. This requires a stylus with a narrow side radius, such as 5 micrometres (0.2 mils). A narrow-profile elliptical stylus is able to read the higher frequencies (greater than 20 kHz), but at an increased wear, since the contact surface is narrower. For overcoming this problem, the Shibata stylus was invented around 1972 in Japan by Norio Shibata of JVC.
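To see why so narrow a side radius matters, compare the tip to the recorded wavelength: the groove's linear velocity falls toward the record's centre, and a 50 kHz carrier cut near the inner grooves is only a few micrometres long, so the tip must be of comparable or smaller size to trace it. A back-of-the-envelope sketch (the groove radius used is an illustrative assumption):

```python
import math

def groove_velocity_m_s(rpm, groove_radius_m):
    """Linear speed of the groove passing under the stylus."""
    return 2 * math.pi * groove_radius_m * (rpm / 60.0)

def recorded_wavelength_um(rpm, groove_radius_m, freq_hz):
    """Wavelength of a tone of freq_hz cut at this radius, in micrometres."""
    return groove_velocity_m_s(rpm, groove_radius_m) / freq_hz * 1e6

# Near the inner grooves of an LP (assumed radius ~70 mm, 33 1/3 rpm),
# a 50 kHz CD-4 carrier is cut at a wavelength of roughly 5 micrometres,
# comparable to the 5-micrometre side radius mentioned above.
wl = recorded_wavelength_um(100.0 / 3.0, 0.070, 50_000)
print(round(wl, 1))  # -> 4.9
```

The same calculation at the outer edge of the disc gives roughly twice the wavelength, which is why inner-groove distortion is the limiting case.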
The Shibata-designed stylus offers a greater contact surface with the groove, which in turn means less pressure on the vinyl surface and thus less wear. A positive side effect is that the greater contact surface also means the stylus reads sections of the vinyl that were not worn by the common spherical stylus. In a demonstration by JVC, records worn by 500 plays at a relatively high 4.5 g tracking force with a spherical stylus played back perfectly with the Shibata profile.
Other advanced stylus shapes appeared following the same goal of increasing contact surface, improving on the Shibata. Chronologically: "Hughes" Shibata variant (1975), "Ogura" (1978), Van den Hul (1982). Such a stylus may be marketed as "Hyperelliptical" (Shure), "Alliptic", "Fine Line" (Ortofon), "Line contact" (Audio Technica), "Polyhedron", "LAC", or "Stereohedron" (Stanton).
A keel-shaped diamond stylus appeared as a byproduct of the invention of the CED Videodisc. This, together with laser-diamond-cutting technologies, made possible the "ridge" shaped stylus, such as the Namiki (1985) design, and Fritz Gyger (1989) design. This type of stylus is marketed as "MicroLine" (Audio technica), "Micro-Ridge" (Shure), or "Replicant" (Ortofon).
To address the problem of steel needle wear upon records, which resulted in the records cracking, RCA Victor devised unbreakable records in 1930 by mixing polyvinyl chloride with plasticisers, in a proprietary formula they called Victrolac, which was first used in 1931 in motion picture discs.
== Equalization ==
Since the late 1950s, almost all phono input stages have used the RIAA equalization standard. Before settling on that standard, there were many different equalizations in use, including EMI, His Master's Voice, Columbia, Decca FFRR, NAB, Ortho, BBC transcription, etc. Recordings made using these other equalization schemes typically sound odd if they are played through a RIAA-equalized preamplifier. High-performance (so-called "multicurve disc") preamplifiers, which include multiple, selectable equalizations, are no longer commonly available. However, some vintage preamplifiers, such as the LEAK varislope series, are still obtainable and can be refurbished. Newer preamplifiers like the Esoteric Sound Re-Equalizer or the K-A-B MK2 Vintage Signal Processor are also available.
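The RIAA playback curve is defined by three time constants (3180 µs, 318 µs and 75 µs). As a minimal illustration of what a RIAA-equalized preamplifier does, the sketch below computes the de-emphasis applied on playback, normalised to 0 dB at 1 kHz; the function names are ours, not from any standard library:

```python
import math

# RIAA playback (de-emphasis) time constants, in seconds.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_gain_db(f):
    """Unnormalised playback gain in dB at frequency f (Hz)."""
    s = 2j * math.pi * f
    h = (1 + s * T2) / ((1 + s * T1) * (1 + s * T3))
    return 20 * math.log10(abs(h))

def riaa_db(f):
    """Playback gain relative to 1 kHz (the usual 0 dB reference)."""
    return riaa_gain_db(f) - riaa_gain_db(1000.0)

# Bass is boosted and treble is cut on playback, mirroring the
# inverse curve applied when the record was cut:
print(round(riaa_db(20), 2))     # low end, roughly +19 dB
print(round(riaa_db(20000), 2))  # high end, roughly -20 dB
```

A recording cut with one of the older curves (Decca FFRR, NAB, etc.) has different time constants, which is why it sounds tonally wrong through this transfer function.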
== Contemporary use and models ==
Although largely replaced since the introduction of the compact disc in 1982, records continued to sell in small numbers throughout the 1980s and 1990s, but were gradually sidelined in favor of CD players and tape decks in home audio environments. Record players continued to be manufactured and sold into the 21st century, although in small numbers and mainly for DJs. Following a resurgence in sales of records since the late 2000s, an increasing number of turntables have been manufactured and sold. Notably, Japanese company Panasonic brought back its well-known advanced Technics SL-1200 at the 2016 Consumer Electronics Show, during which Sony also headlined a turntable, amid increasing interest in the format. Similarly, Audio-Technica revived its 1980s Sound Burger portable player in 2023.
At the low end of the market, Crosley has been especially popular with its suitcase record players and has played a big part in the vinyl revival and its adoption among younger people and children in the 2010s.
New interest in records has led to the development of turntables with additional modern features. USB turntables have a built-in audio interface, which transfers the analog sound directly to the connected computer. Some USB turntables transfer the audio without equalization, but are sold with software that allows the EQ of the transferred audio file to be adjusted. There are also many turntables on the market designed to be plugged into a computer via a USB port for needle dropping purposes.
Modern turntables have also been released featuring Bluetooth technology to output a record's sound wirelessly through speakers. Sony has also released a high-end turntable with an analog-to-digital converter to convert the sound from a playing record into a 24-bit high-resolution audio file in DSD or WAV formats.
== See also ==
Phonograph record
Phonograph cylinder
Archéophone, used to convert diverse types of cylinder recordings to modern, discrete recording formats
Audio signal processing
Compressed air gramophone
List of phonograph manufacturers
Talking Machine World
Vinyl killer
Turntablism
== Notes ==
== References ==
== Further reading ==
Bruil, Rudolf A. (January 8, 2004). "Linear Tonearms Archived 2011-10-17 at the Wayback Machine." Retrieved on July 25, 2011.
Gelatt, Roland. The Fabulous Phonograph, 1877–1977. Second rev. ed., [being also the] First Collier Books ed., in series, Sounds of the Century. New York: Collier, 1977. 349 p., ill. ISBN 0-02-032680-7
Heumann, Michael. "Metal Machine Music: The Phonograph's Voice and the Transformation of Writing." eContact! 14.3 — Turntablism (January 2013). Montréal: CEC.
Koenigsberg, Allen. The Patent History of the Phonograph, 1877–1912. APM Press, 1991.
Reddie, Lovell N. (1908). "The Gramophone And The Mechanical Recording And Reproduction Of Musical Sounds". Annual Report of the Board of Regents of the Smithsonian Institution: 209–231. Retrieved 2009-08-07.
Various. "Turntable [wiki]: Bibliography." eContact! 14.3 — Turntablism (January 2013). Montréal: CEC.
Weissenbrunner, Karin. "Experimental Turntablism: Historical overview of experiments with record players / records — or Scratches from Second-Hand Technology." eContact! 14.3 — Turntablism (January 2013). Montréal: CEC.
Carson, B. H.; Burt, A. D.; Reiskind, and H. I., "A Record Changer And Record Of Complementary Design", RCA Review, June 1949
== External links ==
c.1915 Swiss hot-air engined gramophone at Museum of Retro Technology
Interactive sculpture delivers tactile soundwave experience Archived 2021-03-08 at the Wayback Machine
Early recordings from around the world
The Birth of the Recording Industry
The Cylinder Archive
The Berliner Sound and Image Archive
Cylinder Preservation & Digitization Project – Over 6,000 cylinder recordings held by the Department of Special Collections, University of California, Santa Barbara, free for download or streamed online.
Cylinder players held at the British Library Archived 2012-02-06 at the Wayback Machine – information and high-quality images.
History of Recorded Sound: Phonographs and Records
EnjoytheMusic.com – Excerpts from the book Hi-Fi All-New 1958 Edition
Listen to early recordings on the Edison Phonograph
Mario Frazzetto's Phonograph and Gramophone Gallery.
Say What? – Essay on phonograph technology and intellectual property law
Vinyl Engine – Information, images, articles and reviews from around the world
The Analogue Dept – Information, images and tutorials; strongly focused on Thorens brand
45 rpm player and changer at work on YouTube
Historic video footage of Edison operating his original tinfoil phonograph
Turntable History on Enjoy the Music.com
2-point and Arc Protractor generators on AlignmentProtractor.com
A linear integrated circuit or analog chip is a set of miniature electronic analog circuits formed on a single piece of semiconductor material.
== Description ==
The voltage and current at specified points in the circuits of analog chips vary continuously over time. In contrast, digital chips only assign meaning to voltages or currents at discrete levels. In addition to transistors, analog chips often include a larger number of passive elements (capacitors, resistors, and inductors) than digital chips. Inductors tend to be avoided because of their large physical size and the difficulty of incorporating them into monolithic semiconductor ICs. Certain circuits, such as gyrators, can often act as equivalents of inductors while being constructed only from transistors and capacitors.
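The reason a gyrator can stand in for an inductor is a simple impedance transformation: an ideal gyrator with gyration resistance r, terminated in a capacitor C, presents an inductance L = r²C at its other port. A minimal sketch of the arithmetic, with component values chosen arbitrarily for illustration:

```python
def simulated_inductance(r_ohms, c_farads):
    """Inductance presented by an ideal gyrator with gyration
    resistance r terminated in a capacitor C: L = r^2 * C."""
    return r_ohms ** 2 * c_farads

# A 10 nF capacitor behind a gyrator with r = 1 kOhm looks like
# 10 mH -- an inductance that would be impractically bulky to
# fabricate as a real coil on a monolithic IC.
L = simulated_inductance(1e3, 10e-9)
print(L)  # 0.01 (henries)
```

Both the resistance and the capacitance here are easy to realize on-chip, which is the whole appeal of the technique.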
Analog chips may also contain digital logic elements to replace some analog functions, or to allow the chip to communicate with a microprocessor. For this reason, and since logic is commonly implemented using CMOS technology, these chips typically use BiCMOS processes, as implemented by companies such as Freescale, Texas Instruments, STMicroelectronics, and others. This is known as "mixed signal processing", and allows a designer to incorporate more functions into a single chip. Some of the benefits of this mixed technology include load protection, reduced parts count, and higher reliability.
Purely analog chips in information processing have been mostly replaced with digital chips. Analog chips are still required for wideband signals, high-power applications, and transducer interfaces. Research and industry in this specialty continues to grow and prosper. Some examples of long-lived and well-known analog chips are the 741 operational amplifier, and the 555 timer IC.
Power supply chips are also considered to be analog chips. Their main purpose is to produce a well-regulated output voltage supply for other chips in the system. Since all electronic systems require electrical power, power supply ICs (power management integrated circuits, PMIC) are important elements of those systems.
Important basic building blocks of analog chip design include:
current source
current mirror
differential amplifier
voltage reference, bandgap voltage reference
All the above circuit building blocks can be implemented in bipolar technology as well as metal–oxide–semiconductor (MOS) technology. MOS bandgap references use lateral bipolar transistors for their operation.
People who have specialized in this field include Bob Widlar, Bob Pease, Hans Camenzind, George Erdi, Jim Williams, and Barrie Gilbert, among others.
== See also ==
List of linear integrated circuits
List of LM-series integrated circuits
4000-series integrated circuits
List of 4000-series integrated circuits
7400-series integrated circuits
List of 7400-series integrated circuits
== References ==
Intelligent Power and Sensing Technologies
CMOS Oscillators (AN-118)
CMOS Schmitt Trigger—A Uniquely Versatile Design Component (AN-140)
HCMOS Crystal Oscillators (AN-340)
== Further reading ==
Designing Analog Chips; Hans Camenzind; Virtual Bookworm; 244 pages; 2005; ISBN 978-1589397187. (Free Book)
A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components that form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits use photons (or particles of light) as opposed to electrons that are used by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths typically in the visible spectrum or near-infrared (850–1650 nm).
One of the most commercially utilized material platforms for photonic integrated circuits is indium phosphide (InP), which allows for the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple 2-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections—a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip. Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence of photonic integrated circuits in InP are the University of California at Santa Barbara, USA, the Eindhoven University of Technology, and the University of Twente in the Netherlands.
A 2005 development showed that silicon can, even though it is an indirect bandgap material, still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven and therefore still necessitate a further optical pump laser source.
== History ==
Photonics is the science behind the detection, generation, and manipulation of photons. According to quantum mechanics and the concept of wave–particle duality first proposed by Albert Einstein in 1905, light acts as both an electromagnetic wave and a particle. For example, total internal reflection in an optical fibre allows it to act as a waveguide.
Integrated circuits using electrical components were first developed in the late 1940s and early 1950s, but it took until 1958 for them to become commercially available. When the laser and laser diode were invented in the 1960s, the term "photonics" fell into more common usage to describe the application of light to replace applications previously achieved through the use of electronics.
By the 1980s, photonics gained traction through its role in fibre optic communication. At the start of the decade, an assistant in a new research group at Delft University of Technology, Meint Smit, began pioneering work in the field of integrated photonics. He is credited with inventing the arrayed waveguide grating (AWG), a core component of modern digital connections for the Internet and phones. Smit has received several awards, including an ERC Advanced Grant, a Rank Prize for Optoelectronics and a LEOS Technical Achievement Award.
In October 2022, during an experiment held at the Technical University of Denmark in Copenhagen, a photonic chip transmitted 1.84 petabits per second of data over a fibre-optic cable more than 7.9 kilometres long. First, the data stream was split into 37 sections, each of which was sent down a separate core of the fibre-optic cable. Next, each of these channels was split into 223 parts corresponding to equidistant spikes of light across the spectrum.
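Taking the figures above at face value, the per-channel rate falls out directly: 37 cores times 223 spectral channels gives 8,251 parallel streams, each carrying on the order of a few hundred gigabits per second. A back-of-the-envelope check:

```python
total_bits_per_s = 1.84e15   # 1.84 petabits per second
cores = 37                   # fibre cores
channels_per_core = 223      # spectral channels per core

streams = cores * channels_per_core
per_stream = total_bits_per_s / streams

print(streams)                  # 8251 parallel streams
print(round(per_stream / 1e9))  # ~223 Gbit/s per stream
```

Each individual stream is thus within reach of conventional modulation, while the aggregate is far beyond what any single channel could carry.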
== Comparison to electronic integration ==
Unlike electronic integration, where silicon is the dominant material, photonic integrated circuits have been fabricated from a variety of material systems, including electro-optic crystals such as lithium niobate, silica on silicon, silicon on insulator, various polymers, and semiconductor materials which are used to make semiconductor lasers such as GaAs and InP. The different material systems are used because each provides different advantages and limitations depending on the function to be integrated. For instance, silica (silicon dioxide) based PICs have very desirable properties for passive photonic circuits such as AWGs (see below) due to their comparatively low losses and low thermal sensitivity; GaAs or InP based PICs allow the direct integration of light sources; and silicon PICs enable co-integration of the photonics with transistor-based electronics.
The fabrication techniques are similar to those used in electronic integrated circuits in which photolithography is used to pattern wafers for etching and material deposition. Unlike electronics where the primary device is the transistor, there is no single dominant device. The range of devices required on a chip includes low loss interconnect waveguides, power splitters, optical amplifiers, optical modulators, filters, lasers and detectors. These devices require a variety of different materials and fabrication techniques making it difficult to realize all of them on a single chip.
Newer techniques using resonant photonic interferometry are making way for UV LEDs to be used for optical computing requirements at much lower cost, leading the way to petahertz consumer electronics.
== Examples of photonic integrated circuits ==
The primary application for photonic integrated circuits is in the area of fiber-optic communication, though applications in other fields such as biomedicine and photonic computing are also possible.
The arrayed waveguide gratings (AWGs) which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fiber-optic communication systems are an example of a photonic integrated circuit which has replaced previous multiplexing schemes which utilized multiple discrete filter elements. Since separating optical modes is a need for quantum computing, this technology may be helpful to miniaturize quantum computers (see linear optical quantum computing).
Another example of a photonic integrated chip in wide use today in fiber-optic communication systems is the externally modulated laser (EML) which combines a distributed feed back laser diode with an electro-absorption modulator on a single InP based chip.
== Applications ==
As global data consumption rises and demand for faster networks continues to grow, the world needs to find more sustainable solutions to the energy crisis and climate change. At the same time, ever more innovative applications for sensor technology, such as Lidar in autonomous driving vehicles, appear on the market. There is a need to keep pace with technological challenges.
The expansion of 5G data networks and data centres, safer autonomous driving vehicles, and more efficient food production cannot be sustainably met by electronic microchip technology alone. However, combining electrical devices with integrated photonics provides a more energy efficient way to increase the speed and capacity of data networks, reduce costs and meet an increasingly diverse range of needs across various industries.
=== Data and telecommunications ===
The primary application for PICs is in the area of fibre-optic communication. The arrayed waveguide grating (AWG) which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fibre-optic communication systems are an example of a photonic integrated circuit. Another example in fibre-optic communication systems is the externally modulated laser (EML) which combines a distributed feedback laser diode with an electro-absorption modulator.
PICs can also increase bandwidth and data transfer speeds by deploying few-mode optical planar waveguides, especially if modes can be easily converted from conventional single-mode planar waveguides into few-mode waveguides and the desired modes selectively excited. For example, a bidirectional spatial mode slicer and combiner can be used to obtain the desired higher- or lower-order modes. Its principle of operation relies on cascading stages of V-shaped and/or M-shaped graded-index planar waveguides.
Not only can PICs increase bandwidth and data transfer speeds, they can reduce energy consumption in data centres, which spend a large proportion of energy on cooling servers.
=== Healthcare and medicine ===
Using advanced biosensors and creating more affordable diagnostic biomedical instruments, integrated photonics opens the door to lab-on-a-chip (LOC) technology, cutting waiting times, and taking diagnosis out of laboratories and into the hands of doctors and patients. Based on an ultrasensitive photonic biosensor, SurfiX Diagnostics' diagnostics platform provides a variety of point-of-care tests. Similarly, Amazec Photonics has developed a fibre optic sensing technology with photonic chips which enables high-resolution temperature sensing (fractions of 0.1 millikelvin) without having to inject the temperature sensor within the body. This way, medical specialists are able to measure both cardiac output and circulating blood volume from outside the body. Another example of optical sensor technology is EFI's "OptiGrip" device, which offers greater control over tissue feeling for minimal invasive surgery.
=== Automotive and engineering applications ===
PICs can be applied in sensor systems, like Lidar (which stands for light detection and ranging), to monitor the surroundings of vehicles. They can also be deployed for in-car connectivity through Li-Fi, which is similar to WiFi but uses light. This technology facilitates communication between vehicles and urban infrastructure to improve driver safety. For example, some modern vehicles pick up traffic signs and remind the driver of the speed limit.
In terms of engineering, fibre optic sensors can be used to detect different quantities, such as pressure, temperature, vibrations, accelerations, and mechanical strain. Sensing technology from PhotonFirst uses integrated photonics to measure things like shape changes in aeroplanes, electric vehicle battery temperature, and infrastructure strain.
=== Agriculture and food ===
Sensors play a role in innovations in agriculture and the food industry in order to reduce wastage and detect diseases. Light sensing technology powered by PICs can measure variables beyond the range of the human eye, allowing the food supply chain to detect disease, ripeness and nutrients in fruit and plants. It can also help food producers to determine soil quality and plant growth, as well as measuring CO2 emissions. A new, miniaturised, near-infrared sensor, developed by MantiSpectra, is small enough to fit into a smartphone, and can be used to analyse chemical compounds of products like milk and plastics.
=== AI applications ===
In 2025, researchers at Columbia Engineering developed a 3D photonic-electronic chip that could significantly improve AI hardware. By combining light-based data movement with CMOS electronics, this chip addressed AI's energy and data transfer bottlenecks, improving both efficiency and bandwidth. The breakthrough allowed for high-speed, energy-efficient data communication, enabling AI systems to process vast amounts of data with minimal power. With a bandwidth of 800 Gb/s and a density of 5.3 Tb/s/mm², this technology offered major advances for AI, autonomous vehicles, and high-performance computing.
== Types of fabrication and materials ==
The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition.
The platforms considered most versatile are indium phosphide (InP) and silicon photonics (SiPh):
Indium phosphide (InP) PICs have active laser generation, amplification, control, and detection. This makes them an ideal component for communication and sensing applications.
Silicon nitride (SiN) PICs have a vast spectral range and ultra low-loss waveguide. This makes them highly suited to detectors, spectrometers, biosensors, and quantum computers. The lowest propagation losses reported in SiN (0.1 dB/cm down to 0.1 dB/m) have been achieved by LioniX International's TriPleX waveguides.
Silicon photonics (SiPh) PICs provide low losses for passive components like waveguides and can be used in minuscule photonic circuits. They are compatible with existing electronic fabrication.
The term "silicon photonics" actually refers to the technology rather than the material. It combines high density photonic integrated circuits (PICs) with complementary metal oxide semiconductor (CMOS) electronics fabrication. The most technologically mature and commercially used platform is silicon on insulator (SOI).
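To put the waveguide loss figures quoted above (e.g. 0.1 dB/cm down to 0.1 dB/m for SiN) in perspective: propagation loss in dB scales linearly with length, and the surviving optical power is 10^(−dB/10). A rough conversion, with the waveguide lengths chosen purely for illustration:

```python
def surviving_fraction(loss_db_per_cm, length_cm):
    """Fraction of optical power remaining after propagating
    length_cm through a waveguide with the given loss."""
    total_db = loss_db_per_cm * length_cm
    return 10 ** (-total_db / 10)

# 0.1 dB/cm over a 10 cm spiral: 1 dB total, ~79% of the light survives.
print(round(surviving_fraction(0.1, 10), 2))    # 0.79
# 0.1 dB/m (0.001 dB/cm) over the same 10 cm: essentially lossless.
print(round(surviving_fraction(0.001, 10), 3))  # 0.998
```

The two-orders-of-magnitude spread in loss therefore translates directly into how long a practical on-chip path (a delay line, a resonator, a spiral sensor) can be.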
Other platforms include:
Lithium niobate (LiNbO3) is an ideal material for low-loss modulators. It is highly effective at matching fibre input–output due to its low index and broad transparency window. For more complex PICs, lithium niobate can be formed into large crystals. As part of project ELENA, there is a European initiative to stimulate production of LiNbO3 PICs. Attempts are also being made to develop lithium niobate on insulator (LNOI).
Silica has a low weight and small form factor. It is a common component of optical communication networks, such as planar light wave circuits (PLCs).
Gallium arsenide (GaAs) has high electron mobility. This means GaAs transistors operate at high speeds, making them ideal analogue integrated circuit drivers for high-speed lasers and modulators.
By combining and configuring different chip types (including existing electronic chips) in a hybrid or heterogeneous integration, it is possible to leverage the strengths of each. Taking this complementary approach to integration addresses the demand for increasingly sophisticated energy-efficient solutions.
== Current status ==
As of 2010, photonic integration was an active topic in U.S. defense contracts. It was put forward by the Optical Internetworking Forum for inclusion in 100 gigahertz optical networking standards.
A recent study presents a novel two-dimensional photonic crystal design for electro-reflective modulators, offering reduced size and enhanced efficiency compared to traditional bulky structures. This design achieves high optical transmission ratios with precise angle control, addressing critical challenges in miniaturizing optoelectronic devices for improved performance in PICs. In this structure, both lateral and vertical fabrication technologies are combined, introducing a novel approach that merges two-dimensional designs with three-dimensional structures. This hybrid technique offers new possibilities for enhancing the functionality and integration of photonic components within photonic integrated circuits.
== See also ==
Integrated quantum photonics
Optical computing
Optical transistor
Silicon photonics
== Notes ==
== References ==
Larry Coldren; Scott Corzine; Milan Mashanovitch (2012). Diode Lasers and Photonic Integrated Circuits (Second ed.). John Wiley and Sons. ISBN 9781118148181.
McAulay, Alastair D. (1999). Optical Computer Architectures: The Application of Optical Concepts to Next Generation Computers.
Guha, A.; Ramnarayan, R.; Derstine, M. (1987). "Architectural issues in designing symbolic processors in optics". Proceedings of the 14th annual international symposium on Computer architecture - ISCA '87. p. 145. doi:10.1145/30350.30367. ISBN 0818607769. S2CID 14228669.
Altera Corporation (2011). "Overcome Copper Limits with Optical Interfaces" (PDF).
Brenner, K.-H.; Huang, Alan (1986). "Logic and architectures for digital optical computers (A)". J. Opt. Soc. Am. A3: 62. Bibcode:1986JOSAA...3...62B.
Brenner, K.-H. (1988). "A programmable optical processor based on symbolic substitution". Appl. Opt. 27 (9): 1687–1691. Bibcode:1988ApOpt..27.1687B. doi:10.1364/AO.27.001687. PMID 20531637. S2CID 43648075.
Explicit data graph execution, or EDGE, is a type of instruction set architecture (ISA) which intends to improve computing performance compared to common processors like the Intel x86 line. EDGE combines many individual instructions into a larger group known as a "hyperblock". Hyperblocks are designed to be able to easily run in parallel.
Whereas parallelism in modern CPU designs generally starts to plateau at about eight internal units and from one to four "cores", EDGE designs intend to support hundreds of internal units and offer processing speeds hundreds of times greater than existing designs. Major development of the EDGE concept had been led by the University of Texas at Austin under DARPA's Polymorphous Computing Architectures program, with the stated goal of producing a single-chip CPU design with 1 TFLOPS performance by 2012, which has yet to be realized as of 2018.
== Traditional designs ==
Almost all computer programs consist of a series of instructions that convert data from one form to another. Most instructions require several internal steps to complete an operation. Over time, the relative performance and cost of the different steps have changed dramatically, resulting in several major shifts in ISA design.
=== CISC to RISC ===
In the 1960s memory was relatively expensive, and CPU designers produced instruction sets that densely encoded instructions and data in order to better utilize this resource. For instance, the add A to B to produce C instruction would be provided in many different forms that would gather A and B from different places; main memory, indexes, or registers. Providing these different instructions allowed the programmer to select the instruction that took up the least possible room in memory, reducing the program's needs and leaving more room for data. For instance, the MOS 6502 has eight instructions (opcodes) for performing addition, differing only in where they collect their operands.
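The 6502's addition instruction illustrates this concretely: all eight opcodes perform the same ADC (add with carry) operation and differ only in addressing mode, i.e. where the operand comes from. Sketched as a table (opcodes from the published 6502 instruction set):

```python
# The 6502's ADC instruction: one operation, eight encodings.
ADC_OPCODES = {
    0x69: "immediate",
    0x65: "zero page",
    0x75: "zero page,X",
    0x6D: "absolute",
    0x7D: "absolute,X",
    0x79: "absolute,Y",
    0x61: "(indirect,X)",
    0x71: "(indirect),Y",
}

# Dense encodings like "zero page" take a 1-byte operand instead of a
# 2-byte address -- exactly the memory saving that CISC-era
# programmers were choosing between.
print(len(ADC_OPCODES))  # 8
```

Picking the shortest encoding that reaches the data was a routine part of hand-written assembly on such machines.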
Actually making these instructions work required circuitry in the CPU, which was a significant limitation in early designs and required designers to select just those instructions that were really needed. In 1964, IBM introduced its System/360 series which used microcode to allow a single expansive instruction set architecture (ISA) to run across a wide variety of machines by implementing more or less instructions in hardware depending on the need. This allowed the 360's ISA to be expansive, and this became the paragon of computer design in the 1960s and 70s, the so-called orthogonal design. This style of memory access, with a wide variety of addressing modes, led to instruction sets with hundreds of different instructions, a style known today as CISC (Complex Instruction Set Computing).
In 1975 IBM started a project to develop a telephone switch that required performance about three times that of their fastest contemporary computers. To reach this goal, the development team began to study the massive amount of performance data IBM had collected over the last decade. This study demonstrated that the complex ISA was in fact a significant problem; because only the most basic instructions were guaranteed to be implemented in hardware, compilers ignored the more complex ones that only ran in hardware on certain machines. As a result, the vast majority of a program's time was being spent in only five instructions. Further, even when the program called one of those five instructions, the microcode required a finite time to decode it, even if it was just to call the internal hardware. On faster machines, this overhead was considerable.
Their work, known at the time as the IBM 801, eventually led to the RISC (Reduced Instruction Set Computing) concept. Microcode was removed, and only the most basic versions of any given instruction were put into the CPU. Any more complex code was left to the compiler. The removal of so much circuitry, about 1⁄3 of the transistors in the Motorola 68000 for instance, allowed the CPU to include more registers, which had a direct impact on performance. By the mid-1980s, further developed versions of these basic concepts were delivering performance as much as 10 times that of the fastest CISC designs, in spite of using less-developed fabrication.
=== Internal parallelism ===
In the 1990s the chip design and fabrication process grew to the point where it was possible to build a commodity processor with every potential feature built into it. Units that were previously on separate chips, like floating point units and memory management units, were now able to be combined onto the same die, producing all-in-one designs. This allowed different types of instructions to be executed at the same time, improving overall system performance. In the later 1990s, single instruction, multiple data (SIMD) units were also added, and more recently, AI accelerators.
While these additions improve overall system performance, they do not improve the performance of programs which are primarily operating on basic logic and integer math, which is the majority of programs (one of the outcomes of Amdahl's law). To improve performance on these tasks, CPU designs started adding internal parallelism, becoming "superscalar". In any program there are instructions that work on unrelated data, so by adding more functional units these instructions can be run at the same time. A new portion of the CPU, the scheduler, looks for these independent instructions and feeds them into the units, taking their outputs and re-ordering them so externally it appears they ran in succession.
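The scheduler's core job can be reduced to a dependency check: two instructions may issue in the same cycle only if neither writes a register the other reads or writes. The following is a toy model of that check, not the logic of any real microarchitecture:

```python
# Each toy instruction names the registers it writes and reads:
# (op, dest, src1, src2)
def independent(a, b):
    """True if a and b can issue in the same cycle: no read-after-write,
    write-after-read, or write-after-write hazard between them."""
    a_dst, a_src = {a[1]}, {a[2], a[3]}
    b_dst, b_src = {b[1]}, {b[2], b[3]}
    return not (a_dst & (b_dst | b_src) or b_dst & a_src)

i1 = ("add", "r1", "r2", "r3")  # r1 <- r2 + r3
i2 = ("add", "r4", "r5", "r6")  # touches no register i1 uses
i3 = ("add", "r7", "r1", "r2")  # reads i1's result

print(independent(i1, i2))  # True: can run in parallel
print(independent(i1, i3))  # False: i3 must wait for r1
```

A real scheduler performs this comparison across a window of dozens of in-flight instructions every cycle, which is why its complexity grows so quickly with window size.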
The amount of parallelism that can be extracted in superscalar designs is limited by the number of instructions that the scheduler can examine for interdependencies. Examining a greater number of instructions can improve the chance of finding an instruction that can be run in parallel, but only at the cost of increasing the complexity of the scheduler itself. Despite massive efforts, CPU designs using classic RISC or CISC ISAs plateaued by the late 2000s. Intel's Haswell designs of 2013 have a total of eight dispatch units, and adding more significantly complicates design and increases power demands.
Additional performance can be wrung from systems by examining the instructions to find ones that operate on different types of data and adding units dedicated to that sort of data; this led to the introduction of on-board floating point units in the 1980s and 90s and, more recently, single instruction, multiple data (SIMD) units. The drawback to this approach is that it makes the CPU less generic; feeding the CPU with a program that uses almost all floating point instructions, for instance, will bog the FPUs while the other units sit idle.
A more recent problem in modern CPU designs is the delay talking to the registers. In general terms the size of the CPU die has remained largely the same over time, while the size of the units within the CPU has grown much smaller as more and more units were added. That means that the relative distance between any one function unit and the global register file has grown over time. Once introduced in order to avoid delays in talking to main memory, the global register file has itself become a delay that is worth avoiding.
=== A new ISA? ===
Just as the delays talking to memory while its price fell suggested a radical change in ISA (Instruction Set Architecture) from CISC to RISC, designers are considering whether the problems scaling in parallelism and the increasing delays talking to registers demand another switch in basic ISA.
Among the ways to introduce a new ISA are the very long instruction word (VLIW) architectures, typified by the Itanium. VLIW moves the scheduler logic out of the CPU and into the compiler, where it has much more memory and longer timelines to examine the instruction stream. This static placement, static issue execution model works well when all delays are known, but in the presence of cache latencies, filling instruction words has proven to be a difficult challenge for the compiler. An instruction that might take five cycles if the data is in the cache could take hundreds if it is not, but the compiler has no way to know whether that data will be in the cache at runtime – that's determined by overall system load and other factors that have nothing to do with the program being compiled.
The key performance bottleneck in traditional designs is that the data and the instructions that operate on them are theoretically scattered about memory. Memory performance dominates overall performance, and classic dynamic placement, dynamic issue designs seem to have reached the limit of their performance capabilities. VLIW uses a static placement, static issue model, but has proven difficult to master because the runtime behavior of programs is difficult to predict and properly schedule in advance.
== EDGE ==
=== Theory ===
EDGE architectures are a new class of ISAs based on a static placement, dynamic issue design. EDGE systems compile source code into a form consisting of statically allocated hyperblocks, each containing many individual instructions (hundreds or thousands). These hyperblocks are then scheduled dynamically by the CPU. EDGE thus combines the advantages of the VLIW concept of looking for independent data at compile time, with the superscalar RISC concept of executing the instructions when the data for them becomes available.
In the vast majority of real-world programs, the linkage of data and instructions is both obvious and explicit. Programs are divided into small blocks referred to as subroutines, procedures or methods (depending on the era and the programming language being used) which generally have well-defined entrance and exit points where data is passed in or out. This information is lost as the high level language is converted into the processor's much simpler ISA. But this information is so useful that modern compilers have generalized the concept as the "basic block", attempting to identify them within programs while they optimize memory access through the registers. A block of instructions does not have control statements but can have predicated instructions. The dataflow graph is encoded using these blocks, by specifying the flow of data from one block of instructions to another, or to some storage area.
The basic idea of EDGE is to directly support and operate on these blocks at the ISA level. Since basic blocks access memory in well-defined ways, the processor can load up related blocks and schedule them so that the output of one block feeds directly into the one that will consume its data. This eliminates the need for a global register file, and simplifies the compiler's task in scheduling access to the registers by the program as a whole – instead, each basic block is given its own local registers and the compiler optimizes access within the block, a much simpler task.
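The producer-to-consumer linking described above can be sketched as a small graph-building step. The block representation (named blocks with declared input and output values) is an invented illustration of how a compiler might encode inter-block dataflow, not the actual TRIPS or EDGE encoding.

```python
# Illustrative sketch (block format invented): given blocks with declared
# inputs and outputs, connect each producer directly to the block that will
# consume its data, as an EDGE-style compiler encodes the dataflow graph.

def link_blocks(blocks):
    """Return (producer, consumer, value) edges between blocks."""
    edges = []
    for pname, pinfo in blocks.items():
        for value in pinfo["outputs"]:
            for cname, cinfo in blocks.items():
                if value in cinfo["inputs"]:
                    edges.append((pname, cname, value))
    return edges

blocks = {
    "loop_body": {"inputs": {"i"},   "outputs": {"sum"}},
    "epilogue":  {"inputs": {"sum"}, "outputs": {"result"}},
}
print(link_blocks(blocks))
```

With these edges known, a scheduler can place `loop_body` and `epilogue` on adjacent block engines so the output of one feeds directly into the other without touching a global register file.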
EDGE systems bear a strong resemblance to the dataflow languages of the 1960s–1970s, which saw renewed interest in the 1990s. Dataflow computers execute programs according to the "dataflow firing rule", which stipulates that an instruction may execute at any time after its operands are available. Due to the isolation of data, similar to EDGE, dataflow languages are inherently parallel, and interest in them followed the more general interest in massive parallelism as a solution to general computing problems. Studies based on existing CPU technology at the time demonstrated that it would be difficult for a dataflow machine to keep enough data near the CPU to be widely parallel, and it is precisely this bottleneck that modern fabrication techniques can solve by placing hundreds of CPUs and their memory on a single die.
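The dataflow firing rule can be demonstrated with a toy interpreter. The instruction format is invented for illustration; the point is only that instructions fire in data order, not program order.

```python
# Minimal sketch of the "dataflow firing rule": an instruction may execute
# as soon as all of its operands are available. Instruction format invented.

import operator

def run_dataflow(instructions, initial_values):
    values = dict(initial_values)
    remaining = list(instructions)
    while remaining:
        fired = False
        for inst in list(remaining):
            dest, op, (a, b) = inst
            if a in values and b in values:        # operands available?
                values[dest] = op(values[a], values[b])
                remaining.remove(inst)
                fired = True
        if not fired:
            raise RuntimeError("deadlock: no instruction can fire")
    return values

# Listed out of dependency order on purpose: t2 waits until t1 has fired.
program = [
    ("t2", operator.mul, ("t1", "t1")),
    ("t1", operator.add, ("x", "y")),
]
result = run_dataflow(program, {"x": 2, "y": 3})
print(result["t2"])  # → 25
```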
Another reason that dataflow systems never became popular is that compilers of the era found it difficult to work with common imperative languages like C++. Instead, most dataflow systems used dedicated languages like Prograph, which limited their commercial interest. A decade of compiler research has eliminated many of these problems, and a key difference between dataflow and EDGE approaches is that EDGE designs intend to work with commonly used languages.
=== CPUs ===
An EDGE-based CPU would consist of one or more small block engines with their own local registers; realistic designs might have hundreds of these units. The units are interconnected to each other using dedicated inter-block communication links. Due to the information encoded into the block by the compiler, the scheduler can examine an entire block to see if its inputs are available and send it into an engine for execution – there is no need to examine the individual instructions within.
With a small increase in complexity, the scheduler can examine multiple blocks to see if the outputs of one are fed in as the inputs of another, and place these blocks on units that reduce their inter-unit communications delays. If a modern CPU examines a thousand instructions for potential parallelism, the same complexity in EDGE allows it to examine a thousand hyperblocks, each one consisting of hundreds of instructions. This gives the scheduler considerably better scope at no additional cost. It is this pattern of operation that gives the concept its name; the "graph" is the string of blocks connected by the data flowing between them.
Another advantage of the EDGE concept is that it is massively scalable. A low-end design could consist of a single block engine with a stub scheduler that simply sends in blocks as they are called by the program. An EDGE processor intended for desktop use would instead include hundreds of block engines. Critically, all that changes between these designs is the physical layout of the chip and private information that is known only by the scheduler; a program written for the single-unit machine would run without any changes on the desktop version, albeit thousands of times faster. Power scaling is likewise dramatically improved and simplified; block engines can be turned on or off as required with a linear effect on power consumption.
Perhaps the greatest advantage to the EDGE concept is that it is suitable for running any sort of data load. Unlike modern CPU designs where different portions of the CPU are dedicated to different sorts of data, an EDGE CPU would normally consist of a single type of ALU-like unit. A desktop user running several different programs at the same time would get just as much parallelism as a scientific user feeding in a single program using floating point only; in both cases the scheduler would simply load every block it could into the units. At a low level the performance of the individual block engines would not match that of a dedicated FPU, for instance, but it would attempt to overwhelm any such advantage through massive parallelism.
== Implementations ==
=== TRIPS ===
The University of Texas at Austin was developing an EDGE ISA known as TRIPS. In order to simplify the microarchitecture of a CPU designed to run it, the TRIPS ISA imposes several well-defined constraints on each TRIPS hyperblock, which must:
have at most 128 instructions,
issue at most 32 loads and/or stores,
issue at most 32 register bank reads and/or writes,
have one branch decision, used to indicate the end of a block.
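The four constraints above can be expressed as a simple validity check. The hyperblock representation below is invented for illustration; only the numeric limits come from the TRIPS ISA as described.

```python
# Sketch of a TRIPS hyperblock validity check. Block fields are invented;
# the limits (128 instructions, 32 loads/stores, 32 register accesses,
# exactly one branch) are the constraints listed above.

def valid_trips_hyperblock(block):
    return (len(block["instructions"]) <= 128
            and block["loads_stores"] <= 32
            and block["register_accesses"] <= 32
            and block["branches"] == 1)

ok = {"instructions": ["add"] * 100, "loads_stores": 10,
      "register_accesses": 8, "branches": 1}
too_big = {"instructions": ["add"] * 200, "loads_stores": 10,
           "register_accesses": 8, "branches": 1}
print(valid_trips_hyperblock(ok), valid_trips_hyperblock(too_big))  # → True False
```

Because every block obeys these bounds, the hardware scheduler can treat a hyperblock as a single fixed-size unit of work rather than examining its individual instructions.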
The TRIPS compiler statically bundles instructions into hyperblocks, but also statically compiles these blocks to run on particular ALUs. This means that TRIPS programs have some dependency on the precise implementation they are compiled for.
In 2003 they produced a sample TRIPS prototype with sixteen block engines in a 4 by 4 grid, along with a megabyte of local cache and transfer memory. A single chip version of TRIPS, fabbed by IBM in Canada using a 130 nm process, contains two such "grid engines" along with shared level-2 cache and various support systems. Four such chips and a gigabyte of RAM are placed together on a daughter-card for experimentation.
The TRIPS team had set an ultimate goal of producing a single-chip implementation capable of running at a sustained performance of 1 TFLOPS, about 50 times the performance of high-end commodity CPUs available in 2008 (the dual-core Xeon 5160 provides about 17 GFLOPS).
=== CASH ===
CMU's CASH is a compiler that produces an intermediate code called "Pegasus". CASH and TRIPS are very similar in concept, but CASH is not targeted to produce output for a specific architecture, and therefore has no hard limits on the block layout.
=== WaveScalar ===
The University of Washington's WaveScalar architecture is substantially similar to EDGE, but does not statically place instructions within its "waves". Instead, special instructions (phi and rho) mark the boundaries of the waves and allow scheduling.
== References ==
=== Citations ===
=== Bibliography ===
University of Texas at Austin, "TRIPS Technical Overview"
A. Smith et al., "Compiling for EDGE Architectures", 2006 International Conference on Code Generation and Optimization, March, 2006 | Wikipedia/Explicit_data_graph_execution |
Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. It deals with the relationship between binary inputs and outputs by passing electrical signals through logical gates, resistors, capacitors, amplifiers, and other electrical components. The field of digital electronics is in contrast to analog electronics which work primarily with analog signals (signals with varying degrees of intensity as opposed to on/off two state binary signals). Despite the name, digital electronics designs include important analog design considerations.
Large assemblies of logic gates, used to represent more complex ideas, are often packaged into integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions.
== History ==
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705) and he also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the invention of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in physics, for creating the first modern electronic AND gate in 1924.
Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed, with the term digital being proposed by George Stibitz in 1942. Originally they were the size of a large room, consuming as much power as several hundred modern PCs.
Claude Shannon laid the foundations of digital computing and digital circuits in his 1937 master's thesis, in which he demonstrated that electrical applications of Boolean algebra could construct any logical numerical relationship. The thesis is considered to be arguably the most important master's thesis ever written, and won the 1939 Alfred Noble Prize.
The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer. Its operation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming.
At the same time that digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes. Their "transistorised computer", the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.
In 1955, Carl Frosch and Lincoln Derick discovered silicon dioxide surface passivation effects. In 1957 Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide field effect transistors, the first planar transistors, in which drain and source were adjacent at the same surface. At Bell Labs, the importance of Frosch and Derick's technique and transistors was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni, who would later invent the planar process in 1959 while at Fairchild Semiconductor. At Bell Labs, J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides, fabricated a high quality Si/SiO2 stack and published their results in 1960. Following this research at Bell Labs, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. The team included E. E. LaBate and E. I. Povilonis who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza who developed the diffusion processes, and H. K. Gummel and R. Lindner who characterized the device.
While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958. Kilby's chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The basis for Noyce's silicon IC was Hoerni's planar process.
The MOSFET's advantages include high scalability, affordability, low power consumption, and high transistor density. Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains, the basis for electronic digital signals, in contrast to BJTs which, more slowly, generate analog signals resembling sine waves. Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits. The MOSFET revolutionized the electronics industry, and is the most common semiconductor device.
In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip. Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed, and good designs required thorough planning, giving rise to new design methods. The transistor count of devices and total production rose to unprecedented heights. The total amount of transistors produced until 2018 has been estimated to be 1.3×10²² (13 sextillion).
The wireless revolution (the introduction and proliferation of wireless networks) began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS). Wireless networks allowed for public digital transmission without the need for cables, leading to digital television, satellite and digital radio, GPS, wireless Internet and mobile phones through the 1990s–2000s.
== Properties ==
An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise. For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.
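The noise-immunity argument above can be demonstrated numerically. The signal levels and noise amplitude here are arbitrary illustrations: as long as the noise never pushes a sample across the decision threshold, the original bits are recovered exactly.

```python
# Sketch of digital noise immunity: bits sent as 0.0/1.0 voltages with
# additive noise smaller than half the level spacing are recovered exactly
# by thresholding at the midpoint. All numbers are illustrative.

import random

random.seed(0)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
received = [b + random.uniform(-0.3, 0.3) for b in bits]   # noisy channel
recovered = [1 if v >= 0.5 else 0 for v in received]       # threshold at 0.5
print(recovered == bits)  # → True
```

An analog signal subjected to the same noise would carry the corruption forward permanently; there is no threshold to snap back to.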
In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.
With computer-controlled digital systems, new functions can be added through software revision and no hardware changes are needed. Often this can be done outside of the factory by updating the product's software. This way, the product's design errors can be corrected even after the product is in a customer's hands.
Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrade the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data provided too many errors do not occur.
In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which adds to circuit complexity through the need for components such as heat sinks. In portable or battery-powered systems this can limit the use of digital systems. For example, battery-powered cellular phones often use a low-power analog front-end to amplify and tune the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible software radios. Such base stations can easily be reprogrammed to process the signals used in new cellular standards.
Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
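The trade-off between bit depth and quantization error can be sketched directly. The sample value and bit depths below are arbitrary illustrations.

```python
# Sketch of quantization: more bits give finer steps, so the worst-case
# quantization error shrinks. The input value is an arbitrary illustration.

def quantize(x, bits):
    """Quantize x in [0, 1] to a 'bits'-bit code and back."""
    levels = 2 ** bits
    return round(x * (levels - 1)) / (levels - 1)

x = 0.337
for bits in (3, 8, 16):
    err = abs(quantize(x, bits) - x)
    print(bits, err)
```

With 3 bits there are only 8 representable levels, so the error is large; at 16 bits the error falls below what most applications can measure, which is the sense in which "enough digital data" represents the signal to the desired fidelity.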
If a single piece of digital data is lost or misinterpreted, in some systems only a small error may result, while in other systems the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse-code modulation causes, at worst, a single audible click. But when using audio compression to save storage space and transmission time, a single bit error may cause a much larger disruption.
Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors, or request retransmission of the data.
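A parity bit, mentioned above as the simplest error-management scheme, can be sketched in a few lines: the extra bit makes the total number of 1s even, so any single-bit flip is detectable (though not correctable, and two flips cancel out).

```python
# Sketch of even parity for error detection: append a bit so the total
# number of 1s is even; a single flipped bit then makes the check fail.

def add_parity(bits):
    return bits + [sum(bits) % 2]

def parity_ok(word):
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(parity_ok(word))          # → True
word[2] ^= 1                    # flip one bit "in transit"
print(parity_ok(word))          # → False
```

Real systems layer stronger codes (CRCs, Hamming or Reed–Solomon codes) on the same principle to correct errors rather than merely detect them.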
== Construction ==
A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors but thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.
Another form of digital circuit is constructed from lookup tables, (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small-volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software.
Integrated circuits consist of multiple transistors on one silicon chip and are the least expensive way to make a large number of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board which is a board that holds electrical components, and connects them together with copper traces.
== Design ==
Engineers use many methods to minimize logic redundancy in order to reduce the circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system.
Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that do not require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic.
=== Representation ===
A digital circuit's input-output relationship can be represented as a truth table. An equivalent high-level circuit uses logic gates, each represented by a different shape (standardized by IEEE/ANSI 91–1984). A low-level representation uses an equivalent circuit of electronic switches (usually transistors).
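The truth-table representation can be generated mechanically from a gate-level description. The half-adder below is just an example circuit chosen for illustration.

```python
# Sketch of deriving a truth table from a gate-level description.
# The half-adder (one XOR gate, one AND gate) is an example circuit.

from itertools import product

def truth_table(fn, n_inputs):
    """Enumerate all input combinations and record the outputs."""
    return {inputs: fn(*inputs) for inputs in product((0, 1), repeat=n_inputs)}

def half_adder(a, b):
    return (a ^ b, a & b)          # (sum, carry)

for inputs, outputs in truth_table(half_adder, 2).items():
    print(inputs, "->", outputs)
```

The table is the circuit's complete specification: any implementation (gates, lookup table, or switches) that reproduces it is equivalent.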
Most digital systems divide into combinational and sequential systems. The output of a combinational system depends only on the present inputs. However, a sequential system has some of its outputs fed back as inputs, so its output may depend on past inputs in addition to present inputs, to produce a sequence of operations. Simplified representations of their behavior called state machines facilitate design and test.
Sequential systems divide into two further subcategories. "Synchronous" sequential systems change state all at once when a clock signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made using flip flops that store inputted voltages as a bit only when the clock changes.
=== Synchronous systems ===
The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation for the next state. On each clock cycle, the state register captures the feedback generated from the previous state of the combinational logic and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic.
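The state-register-plus-combinational-logic structure can be modeled in a few lines. The example machine (a 2-bit counter with an enable input) is invented for illustration; the point is the separation between the pure next-state function and the register that only updates on a clock edge.

```python
# Sketch of a synchronous state machine: a state register updated once per
# clock edge by purely combinational next-state logic. The 2-bit counter
# is an invented example.

def next_state_logic(state, enable):
    """Combinational: computes the next state from the current inputs."""
    return (state + 1) % 4 if enable else state

state_register = 0                   # the flip-flops
for clock_cycle in range(6):         # six rising clock edges
    state_register = next_state_logic(state_register, enable=1)
print(state_register)  # → 2
```

Between clock edges the combinational logic may glitch freely; only the value present at the edge is captured, which is what makes synchronous designs easy to analyze.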
=== Asynchronous systems ===
Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage of its speed not being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates.
Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.
Asynchronous logic components can be hard to design because all possible states, in all possible timings must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called "self-resynchronization"). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable—that is—real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components.
=== Register transfer systems ===
Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic and written with hardware description languages such as VHDL or Verilog.
In register transfer logic, binary numbers are stored in groups of flip flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.
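The register/bus/calculation structure described above can be sketched as follows. All names (the adder, the registers, the multiplexer select) are invented for illustration.

```python
# Sketch of register transfer logic: registers hold numbers, buses carry
# register outputs into combinational calculations, and a multiplexer on a
# register's input selects which bus it loads from. Names are invented.

registers = {"A": 5, "B": 7, "R": 0}

def adder(x, y):                     # a "calculation": combinational logic
    return x + y

def clock_edge(mux_select):
    bus_sum = adder(registers["A"], registers["B"])   # adder's output bus
    bus_a = registers["A"]
    # Multiplexer on R's input chooses one of the two buses.
    registers["R"] = bus_sum if mux_select == "sum" else bus_a

clock_edge("sum")
print(registers["R"])  # → 12
```

A hardware description language expresses exactly this structure; the sequential state machine mentioned above would drive `mux_select` and decide which registers accept new data on each cycle.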
Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs.
=== Computer design ===
The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. In this way, the complex task of designing the controls of a computer is reduced to the simpler task of programming a collection of much simpler logic machines.
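The microprogram-as-table idea can be sketched with a toy sequencer. The control fields and the tiny one-accumulator datapath below are invented for illustration; real microprograms control hundreds of signals per entry.

```python
# Toy sketch of a microprogrammed control unit: a sequencer steps through a
# table whose entries set control bits each cycle, like a player-piano roll.
# Control fields and datapath are invented.

microprogram = [
    {"load_acc": 1, "add": 0, "jump": None},   # ACC <- input
    {"load_acc": 0, "add": 1, "jump": None},   # ACC <- ACC + input
    {"load_acc": 0, "add": 1, "jump": None},   # ACC <- ACC + input
    {"load_acc": 0, "add": 0, "jump": 0},      # restart the microprogram
]

def run(inputs, steps):
    acc, upc = 0, 0              # accumulator, microprogram counter
    for value in inputs[:steps]:
        ctrl = microprogram[upc]
        if ctrl["load_acc"]:
            acc = value
        elif ctrl["add"]:
            acc += value
        upc = ctrl["jump"] if ctrl["jump"] is not None else upc + 1
    return acc

print(run([2, 3, 4], 3))  # → 9
```

Changing the table changes the machine's behavior without touching the datapath, which is the sense in which designing a control unit reduces to programming a simpler machine.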
Almost all computers are synchronous. However, asynchronous computers have also been built. One example is the ASPIDA DLX core. Another was offered by ARM Holdings. They do not, however, have any speed advantages because modern computer designs already run at the speed of their slowest component, usually memory. They do use somewhat less power because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally-pure radio noise. They are used in some radio-sensitive mobile-phone base-station controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.
=== Computer architecture ===
Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way possible for a specific purpose. Computer architects have put a lot of work into reducing the cost and increasing the speed of computers in addition to boosting their immunity to programming errors. An increasingly common goal of computer architects is to reduce the power used in battery-powered computer systems, such as smartphones.
=== Design issues in digital circuits ===
Digital circuits are made from analog components. The design must assure that the analog nature of the components does not dominate the desired digital behavior. Digital systems must manage noise and timing margins, parasitic inductances and capacitances.
Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses that do not reach valid threshold voltages.
Additionally, where clocked digital systems interface to analog systems or systems that are driven from a different clock, the digital system can be subject to metastability where a change to the input violates the setup time for a digital input latch.
Since digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity.
=== Automated design tools ===
Much of the effort of designing large logic machines has been automated through the application of electronic design automation (EDA).
Simple truth table-style descriptions of logic are often optimized with EDA that automatically produce reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer. Optimizing large logic systems may be done using the Quine–McCluskey algorithm or binary decision diagrams. There are promising experiments with genetic algorithms and annealing optimizations.
To automate costly engineering processes, some EDA can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and their associated output signals.
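The expansion from state table to truth table is mechanical once the states are assigned binary encodings. A minimal sketch in Python, using a hypothetical one-bit machine that detects two consecutive 1s:

```python
# Hypothetical two-state sequence detector written as a state table.
# Each entry maps (current_state, input) to (next_state, output).
state_table = {
    ('S0', 0): ('S0', 0),
    ('S0', 1): ('S1', 0),
    ('S1', 0): ('S0', 0),
    ('S1', 1): ('S1', 1),   # output asserted on the second consecutive 1
}
encoding = {'S0': 0, 'S1': 1}

def to_truth_table(table, enc):
    """Each row: (state bit Q, input, next-state bit D, output)."""
    return [(enc[s], i, enc[n], o)
            for (s, i), (n, o) in sorted(table.items())]

for row in to_truth_table(state_table, encoding):
    print(row)
```

The resulting rows are exactly the combinational logic a state-machine compiler would hand to a logic minimizer.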
Often, real logic systems are designed as a series of sub-projects, which are combined using a tool flow. The tool flow is usually controlled with the help of a scripting language, a simplified computer language that can invoke the software design tools in the right order. Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers. Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions on how to draw the transistors and wires on an integrated circuit or a printed circuit board.
Parts of tool flows are debugged by verifying the outputs of simulated logic against expected inputs. The test tools take computer files with sets of inputs and outputs and highlight discrepancies between the simulated behavior and the expected behavior. Once the input data is believed to be correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors.
The functional verification data are usually called test vectors. The functional test vectors may be preserved and used in the factory to test whether newly constructed logic works correctly. However, functional test patterns do not discover all fabrication faults. Production tests are often designed by automatic test pattern generation software tools. These generate test vectors by examining the structure of the logic and systematically generating tests targeting particular potential faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).
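The idea behind stuck-at fault coverage can be illustrated in a few lines: simulate a small circuit once fault-free and once with each candidate fault injected, and count the faults whose effect becomes visible at an output. This is a toy sketch of the principle, not a real ATPG tool; the circuit and net names are invented:

```python
from itertools import product

def and_or(a, b, c, fault=None):
    """y = (a AND b) OR c, with an optional stuck-at fault on a named net."""
    nets = {'a': a, 'b': b, 'c': c}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]          # input net stuck at 0 or 1
    w = nets['a'] & nets['b']
    if fault and fault[0] == 'w':
        w = fault[1]                        # internal net stuck at 0 or 1
    return w | nets['c']

# all single stuck-at-0 / stuck-at-1 faults on the four nets
fault_list = [(net, v) for net in ('a', 'b', 'c', 'w') for v in (0, 1)]

def coverage(vectors):
    """Fraction of faults visible at the output for at least one test vector."""
    detected = sum(
        any(and_or(*v) != and_or(*v, fault=f) for v in vectors)
        for f in fault_list)
    return detected / len(fault_list)

print(coverage(list(product((0, 1), repeat=3))))  # exhaustive vectors → 1.0
```

A production ATPG tool does the same accounting on millions of nets, choosing a small vector set that still pushes coverage toward 100%.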
Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Software designed for manufacturability adds interference patterns to the exposure masks to eliminate open-circuits and enhance the masks' contrast.
=== Design for testability ===
There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the designed circuit meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws.
A large logic machine (say, with more than a hundred logical variables) can have an astronomical number of possible states. Obviously, factory testing every state of such a machine is unfeasible, for even if testing each state only took a microsecond, there are more possible states than there are microseconds since the universe began!
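The arithmetic behind that claim is easy to check. Taking the commonly cited ~13.8-billion-year age of the universe:

```python
AGE_OF_UNIVERSE_YEARS = 13.8e9                 # commonly cited estimate
microseconds = AGE_OF_UNIVERSE_YEARS * 365.25 * 24 * 3600 * 1e6
states = 2 ** 100                              # machine with 100 binary state variables

print(f"{microseconds:.1e} microseconds vs {states:.1e} states")
print(states / microseconds > 1e6)             # → True: a millionfold shortfall
```

Even at one state per microsecond, the test would take over a million times the age of the universe.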
Large logic machines are almost always designed as assemblies of smaller logic machines. To save time, the smaller sub-machines are isolated by permanently installed design for test circuitry and are tested independently. One common testing scheme provides a test mode that forces some part of the logic machine to enter a test cycle. The test cycle usually exercises large independent parts of the machine.
Boundary scan is a common test scheme that uses serial communication with external test equipment through one or more shift registers known as scan chains. Serial scans have only one or two wires to carry the data, and minimize the physical size and expense of the infrequently used test logic. After all the test data bits are in place, the design is reconfigured to be in normal mode and one or more clock pulses are applied, to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-flops or latches in the scan shift register(s). Finally, the result of the test is shifted out to the block boundary and compared against the predicted good machine result.
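The serial shift behaviour of a scan chain can be modelled as a simple shift register. A minimal Python sketch (the pattern and chain length are arbitrary illustrative values):

```python
def scan_shift(chain, serial_in):
    """One test-clock shift: a bit enters the chain, the last flip-flop's bit exits."""
    return [serial_in] + chain[:-1], chain[-1]

# load a 4-bit test pattern, one bit per test clock (last bit of the pattern first)
chain, pattern = [0, 0, 0, 0], [1, 0, 1, 1]
for bit in reversed(pattern):
    chain, _ = scan_shift(chain, bit)
print(chain)            # → [1, 0, 1, 1]: the pattern now sits in the flip-flops

# after a functional clock captures results, shift them back out the same way
out = []
for _ in range(len(chain)):
    chain, bit = scan_shift(chain, 0)
    out.append(bit)
print(out)              # the captured bits emerge serially, last flip-flop first
```

Real boundary-scan hardware interleaves this shifting with capture and update states defined by the JTAG TAP controller, but the data path is exactly this kind of register.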
In a board-test environment, serial-to-parallel testing has been formalized as the JTAG standard.
=== Trade-offs ===
==== Cost ====
Since a digital system may use many logic gates, the overall cost of building a computer correlates strongly with the cost of a logic gate. In the 1930s, the earliest digital logic systems were constructed from telephone relays because these were inexpensive and relatively reliable.
The earliest integrated circuits were constructed to save weight and permit the Apollo Guidance Computer to control an inertial guidance system for a spacecraft. The first integrated circuit logic gates cost nearly US$50, which in 2024 would be equivalent to $531. Mass-produced gates on integrated circuits became the least-expensive method to construct digital logic.
With the rise of integrated circuits, reducing the absolute number of chips used represented another way to save costs. The goal of a designer is not just to make the simplest circuit, but to keep the component count down. Sometimes this results in more complicated designs with respect to the underlying digital logic but nevertheless reduces the number of components, board size, and even power consumption.
==== Reliability ====
Another major motive for reducing component count on printed circuit boards is to reduce the manufacturing defect rate due to failed soldered connections and increase reliability. Defect and failure rates tend to increase along with the total number of component pins.
The failure of a single logic gate may cause a digital machine to fail. Where additional reliability is required, redundant logic can be provided. Redundancy adds cost and power consumption over a non-redundant system.
The reliability of a logic gate can be described by its mean time between failure (MTBF). Digital machines first became useful when the MTBF for a switch increased above a few hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out, or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours (8.2×10¹⁰ h). This level of reliability is required because integrated circuits have so many logic gates.
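Under the simplest reliability model (independent, exponentially distributed gate failures, the system failing when any gate fails), failure rates add, so the system MTBF scales inversely with gate count. This is a naive illustrative sketch, not a real reliability calculation:

```python
GATE_MTBF_HOURS = 8.2e10          # per-gate figure quoted above

def system_mtbf(n_gates, gate_mtbf=GATE_MTBF_HOURS):
    """Naive series model: MTBF of N independent gates is gate MTBF / N."""
    return gate_mtbf / n_gates

print(system_mtbf(1e9))           # a billion-gate chip → 82.0 hours in this model
```

The result makes the point of the paragraph concrete: with a billion gates, even an 82-billion-hour gate MTBF yields a system MTBF of only tens of hours under this pessimistic model, which is why per-gate reliability must be so enormous.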
==== Fan-out ====
Fan-out describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs. The minimum practical fan-out is about five. Modern electronic logic gates using CMOS transistors for switches have higher fan-outs.
==== Speed ====
The switching speed describes how long it takes a logic output to change from true to false or vice versa. Faster logic can accomplish more operations in less time. Modern electronic digital logic routinely switches at 5 GHz, and some laboratory systems switch at more than 1 THz.
== Logic families ==
Digital design started with relay logic, which is slow. Occasionally a mechanical failure would occur. Fan-outs were typically about 10, limited by the resistance of the coils and arcing on the contacts from high voltages.
Later, vacuum tubes were used. These were very fast, but generated heat, and were unreliable because the filaments would burn out. Fan-outs were typically 5 to 7, limited by the heating from the tubes' current. In the 1950s, special computer tubes were developed with filaments that omitted volatile elements like silicon. These ran for hundreds of thousands of hours.
The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-out of 3. Diode–transistor logic improved the fan-out up to about 7, and reduced the power. Some DTL designs used two power supplies with alternating layers of NPN and PNP transistors to increase the fan-out.
Transistor–transistor logic (TTL) was a great improvement over these. In early devices, fan-out improved to 10, and later variations reliably achieved 20. TTL was also fast, with some variations achieving switching times as low as 20 ns. TTL is still used in some designs.
Emitter coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers, such as the Illiac IV, made up of many medium-scale components.
By far, the most common digital integrated circuits built today use CMOS logic, which is fast, offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z.
== Recent developments ==
In 2009, researchers discovered that memristors can implement Boolean state storage and provide a complete logic family with very small amounts of space and power, using familiar CMOS semiconductor processes.
The discovery of superconductivity has enabled the development of rapid single flux quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements.
== See also ==
De Morgan's laws
Logical effort
Logic optimization
Microelectronics
Unconventional computing
== Notes ==
== References ==
== Further reading ==
Douglas Lewin, Logical Design of Switching Circuits, Nelson, 1974.
R. H. Katz, Contemporary Logic Design, The Benjamin/Cummings Publishing Company, 1994.
P. K. Lala, Practical Digital Logic Design and Testing, Prentice Hall, 1996.
Y. K. Chan and S. Y. Lim, "Synthetic Aperture Radar (SAR) Signal Generation", Progress In Electromagnetics Research B, Vol. 1, 269–290, 2008. Faculty of Engineering & Technology, Multimedia University, Jalan Ayer Keroh Lama, Bukit Beruang, Melaka 75450, Malaysia.
== External links ==
Digital Circuit Projects: An Overview of Digital Circuits Through Implementing Integrated Circuits (2014)
Lessons in Electric Circuits - Volume IV (Digital) at the Wayback Machine (archived 2012-11-27)
MIT OpenCourseWare introduction to digital design class materials ("6.004: Computation Structures")
A differential is a gear train with three drive shafts that has the property that the rotational speed of one shaft is the average of the speeds of the others. A common use of differentials is in motor vehicles, to allow the wheels at each end of a drive axle to rotate at different speeds while cornering. Other uses include clocks and analogue computers.
Differentials can also provide a gear ratio between the input and output shafts (called the "axle ratio" or "diff ratio"). For example, many differentials in motor vehicles provide a gearing reduction by having fewer teeth on the pinion than the ring gear.
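Both properties of a differential — output speeds averaging to the carrier speed, and the pinion-to-ring reduction — are simple ratios. A hypothetical sketch in Python (the rpm and tooth counts are made-up illustrative numbers):

```python
def carrier_rpm(left_rpm, right_rpm):
    """Open differential: the carrier turns at the average of its two outputs."""
    return (left_rpm + right_rpm) / 2

def ring_gear_rpm(pinion_rpm, pinion_teeth, ring_teeth):
    """Axle ("diff") ratio: fewer pinion teeth than ring teeth gives a reduction."""
    return pinion_rpm * pinion_teeth / ring_teeth

# hypothetical numbers: driveshaft at 2740 rpm through an 11:41 (3.73:1) final drive
carrier = ring_gear_rpm(2740, 11, 41)
print(round(carrier, 1))                     # ≈ 735.1 rpm at the carrier

# while cornering, one wheel can speed up only as much as the other slows down
print(round(carrier_rpm(700.0, 770.2), 1))   # → 735.1, still the carrier speed
```

Whatever the two wheel speeds do individually, their average is pinned to the carrier speed set by the engine and final-drive ratio.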
== History ==
Milestones in the design or use of differentials include:
100 BCE–70 BCE: The Antikythera mechanism has been dated to this period. It was discovered in 1902 on a shipwreck by sponge divers, and modern research suggests that it used a differential gear to determine the angle between the ecliptic positions of the Sun and Moon, and thus the phase of the Moon.
c. 250 CE: Chinese engineer Ma Jun creates the first well-documented south-pointing chariot, a precursor to the compass. Its mechanism of action is unclear, though some 20th century engineers put forward the argument that it used a differential gear.
1810: Rudolph Ackermann of Germany invents a four-wheel steering system for carriages, which some later writers mistakenly report as a differential.
1823: Aza Arnold develops a differential drive train for use in cotton-spinning. The design quickly spreads across the United States and into the United Kingdom.
1827: Modern automotive differential patented by watchmaker Onésiphore Pecqueur (1792–1852) of the Conservatoire National des Arts et Métiers in France for use on a steam wagon.
1874: Aveling and Porter of Rochester, Kent list a crane locomotive in their catalogue fitted with their patent differential gear on the rear axle.
1876: James Starley of Coventry invents chain-drive differential for use on bicycles; invention later used on automobiles by Karl Benz.
1897: While building his Australian steam car, David Shearer made the first use of a differential in a motor vehicle.
1958: Vernon Gleasman patents the Torsen limited-slip differential.
== Use in wheeled vehicles ==
=== Purpose ===
During cornering, the outer wheels of a vehicle must travel further than the inner wheels (since they are on a larger radius). This is easily accommodated when the wheels are not connected; however, it becomes more difficult for the drive wheels, since both are connected to the engine (usually via a transmission). Some vehicles (for example go-karts and trams) use axles without a differential, relying on wheel slip when cornering. However, for improved cornering ability, many vehicles use a differential, which allows the two wheels to rotate at different speeds.
The purpose of a differential is to transfer the engine's power to the wheels while still allowing the wheels to rotate at different speeds when required. An illustration of the operating principle for a ring-and-pinion differential is shown below.
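The speed difference the differential must absorb follows directly from the cornering geometry: each wheel's speed is proportional to the radius of the arc it travels. An illustrative Python sketch with made-up numbers:

```python
def wheel_speeds(v, turn_radius, track_width):
    """Inner and outer wheel speeds on one axle while cornering.

    v is the speed of the axle midpoint; each wheel's speed scales with the
    radius of the arc it follows (simplified kinematics, illustrative only).
    """
    inner = v * (turn_radius - track_width / 2) / turn_radius
    outer = v * (turn_radius + track_width / 2) / turn_radius
    return inner, outer

# 10 m/s around a 20 m radius corner with a 1.6 m track (made-up numbers)
inner, outer = wheel_speeds(10, 20, 1.6)
print(round(inner, 1), round(outer, 1))      # → 9.6 10.4
assert abs((inner + outer) / 2 - 10) < 1e-9  # the differential sees the average
```

Note that the two wheel speeds always average back to the axle-centreline speed, which is exactly the quantity an open differential's carrier provides.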
=== Ring-and-pinion design ===
A relatively simple design of differential is used in rear-wheel drive vehicles, whereby a ring gear is driven by a pinion gear connected to the transmission. The functions of this design are to change the axis of rotation by 90 degrees (from the propshaft to the half-shafts) and provide a reduction in the gear ratio.
The components of the ring-and-pinion differential shown in the schematic diagram on the right are:
1. Output shafts (axles)
2. Drive gear
3. Output gears
4. Planetary gears
5. Carrier
6. Input gear
7. Input shaft (driveshaft)
=== Epicyclic design ===
An epicyclic differential uses epicyclic gearing to send certain proportions of torque to the front axle and the rear axle in an all-wheel drive vehicle. An advantage of the epicyclic design is its relatively compact width (when viewed along the axis of its input shaft).
=== Spur-gear design ===
A spur-gear differential has equal-sized spur gears at each end, each of which is connected to an output shaft. The input torque (i.e. from the engine or transmission) is applied to the differential via the rotating carrier. Pinion pairs are located within the carrier and rotate freely on pins supported by the carrier. The pinion pairs only mesh for the part of their length between the two spur gears, and rotate in opposite directions. The remaining length of a given pinion meshes with the nearer spur gear on its axle. Each pinion connects the associated spur gear to the other spur gear (via the other pinion). As the carrier is rotated (by the input torque), the relationship between the speeds of the input (i.e. the carrier) and that of the output shafts is the same as other types of open differentials.
Uses of spur-gear differentials include the Oldsmobile Toronado, an American front-wheel drive car.
=== Locking differentials ===
Locking differentials have the ability to overcome the chief limitation of a standard open differential by essentially "locking" both wheels on an axle together as if on a common shaft. This forces both wheels to turn in unison, regardless of the traction (or lack thereof) available to either wheel individually. When this function is not required, the differential can be "unlocked" to function as a regular open differential.
Locking differentials are mostly used on off-road vehicles, to overcome low-grip and variable grip surfaces.
=== Limited-slip differentials ===
An undesirable side-effect of a regular ("open") differential is that it can send most of the power to the wheel with the lesser traction (grip). In situations where one wheel has reduced grip (e.g. due to cornering forces or a low-grip surface under one wheel), an open differential can cause wheelspin in the tyre with less grip, while the tyre with more grip receives very little power to propel the vehicle forward.
In order to avoid this situation, various designs of limited-slip differentials are used to limit the difference in power sent to each of the wheels.
=== Torque vectoring ===
Torque vectoring is a technology employed in automobile differentials that has the ability to vary the torque to each half-shaft with an electronic system; or in rail vehicles which achieve the same using individually motored wheels. In the case of automobiles, it is used to augment the stability or cornering ability of the vehicle.
== Other uses ==
Non-automotive uses of differentials include performing analogue arithmetic. Two of the differential's three shafts are made to rotate through angles that represent (are proportional to) two numbers, and the angle of the third shaft's rotation represents the sum or difference of the two input numbers. The earliest known use of a differential gear is in the Antikythera mechanism, c. 80 BCE, which used a differential gear to control a small sphere representing the Moon from the difference between the Sun and Moon position pointers. The ball was painted black and white in hemispheres, and graphically showed the phase of the Moon at a particular point in time. An equation clock that used a differential for addition was made in 1720. In the 20th century, large assemblies of many differentials were used as analogue computers, calculating, for example, the direction in which a gun should be aimed.
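The arithmetic use described above is a direct consequence of the averaging property: the carrier turns through half the sum of the side-shaft angles, so a 2:1 takeoff gear on the carrier reads out the sum. A minimal sketch (the gear arrangement is a hypothetical illustration):

```python
def differential_adder(theta1, theta2):
    """Analogue adder: side shafts turned through the two input angles;
    the carrier turns through half their sum, and a hypothetical 2:1
    takeoff gear on the carrier reads out the full sum."""
    carrier = (theta1 + theta2) / 2    # kinematics of an open differential
    return 2 * carrier

print(differential_adder(30, 50))   # → 80.0 (degrees)
# turning one input backwards yields the difference instead
print(differential_adder(50, -30))  # → 20.0
```

Subtraction falls out for free by reversing the sense of one input shaft, which is how the Antikythera-style lunar-phase computation (Sun position minus Moon position) can be mechanized.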
=== Compass-like devices ===
Chinese south-pointing chariots may also have been very early applications of differentials. The chariot had a pointer which constantly pointed to the south, no matter how the chariot turned as it travelled. It could therefore be used as a type of compass. It is widely thought that a differential mechanism responded to any difference between the speeds of rotation of the two wheels of the chariot, and turned the pointer appropriately. However, the mechanism was not precise enough, and, after a few miles of travel, the dial could be pointing in the wrong direction.
=== Clocks ===
The earliest verified use of a differential was in a clock made by Joseph Williamson in 1720. It employed a differential to add the equation of time to local mean time, as determined by the clock mechanism, to produce solar time, which would have been the same as the reading of a sundial. During the 18th century, sundials were considered to show the "correct" time, so an ordinary clock would frequently have to be readjusted, even if it worked perfectly, because of seasonal variations in the equation of time. Williamson's and other equation clocks showed sundial time without needing readjustment. Nowadays, we consider clocks to be "correct" and sundials usually incorrect, so many sundials carry instructions about how to use their readings to obtain clock time.
=== Analogue computers ===
Differential analysers, a type of mechanical analogue computer, were used from approximately 1900 to 1950. These devices used differential gear trains to perform addition and subtraction.
=== Vehicle suspension ===
The Mars rovers Spirit and Opportunity (both launched in 2004) used differential gears in their rocker-bogie suspensions to keep the rover body balanced as the wheels on the left and right move up and down over uneven terrain. The Curiosity and Perseverance rovers used a differential bar instead of gears to perform the same function.
== See also ==
Anti-lock braking system
Ball differential
Drifting (motorsport)
List of auto parts
Hermann Aron § Electricity meters
Traction control system
Whippletree
== References ==
== Further reading ==
Popular Science, May 1946, How Your Car Turns Corners, a large article with numerous illustrations on how differentials work
== External links ==
A video of a 3D model of an open differential
Photographic film is a strip or sheet of transparent film base coated on one side with a gelatin emulsion containing microscopically small light-sensitive silver halide crystals. The sizes and other characteristics of the crystals determine the sensitivity, contrast, and resolution of the film. Film is typically segmented into frames, which give rise to separate photographs.
The emulsion will gradually darken if left exposed to light, but the process is too slow and incomplete to be of any practical use. Instead, a very short exposure to the image formed by a camera lens is used to produce only a very slight chemical change, proportional to the amount of light absorbed by each crystal. This creates an invisible latent image in the emulsion, which can be chemically developed into a visible photograph. In addition to visible light, all films are sensitive to ultraviolet light, X-rays, gamma rays, and high-energy particles. Unmodified silver halide crystals are sensitive only to the blue part of the visible spectrum, producing unnatural-looking renditions of some colored subjects. This problem was resolved with the discovery that certain dyes, called sensitizing dyes, when adsorbed onto the silver halide crystals made them respond to other colors as well. First orthochromatic (sensitive to blue and green) and finally panchromatic (sensitive to all visible colors) films were developed. Panchromatic film renders all colors in shades of gray approximately matching their subjective brightness. By similar techniques, special-purpose films can be made sensitive to the infrared (IR) region of the spectrum.
In black-and-white photographic film, there is usually one layer of silver halide crystals. When the exposed silver halide grains are developed, the silver halide crystals are converted to metallic silver, which blocks light and appears as the black part of the film negative. Color film has at least three sensitive layers, incorporating different combinations of sensitizing dyes. Typically the blue-sensitive layer is on top, followed by a yellow filter layer to stop any remaining blue light from affecting the layers below. Next comes a green-and-blue sensitive layer, and a red-and-blue sensitive layer, which record the green and red images respectively. During development, the exposed silver halide crystals are converted to metallic silver, just as with black-and-white film. But in a color film, the by-products of the development reaction simultaneously combine with chemicals known as color couplers that are included either in the film itself or in the developer solution to form colored dyes. Because the by-products are created in direct proportion to the amount of exposure and development, the dye clouds formed are also in proportion to the exposure and development. Following development, the silver is converted back to silver halide crystals in the bleach step. It is removed from the film during the process of fixing the image on the film with a solution of ammonium thiosulfate or sodium thiosulfate (hypo or fixer). Fixing leaves behind only the formed color dyes, which combine to make up the colored visible image. Later color films, like Kodacolor II, have as many as 12 emulsion layers, with upwards of 20 different chemicals in each layer.
Photographic film and film stock tend to be similar in composition and speed, but often not in other parameters such as frame size and length. Silver halide photographic paper is also similar to photographic film.
Before the emergence of digital photography, photographs on film had to be developed to produce negatives or projectable slides, and negatives had to be printed as positive images, usually in enlarged form. This was usually done by photographic laboratories, but many amateurs did their own processing.
== Characteristics of film ==
=== Film basics ===
There are several types of photographic film, including:
Print film, when developed, yields transparent negatives with the light and dark areas and colors (if color film is used) inverted to their respective complementary colors. This type of film is designed to be printed onto photographic paper, usually by means of an enlarger but in some cases by contact printing. The paper is then itself developed. The second inversion that results restores light, shade and color to their normal appearance. Color negatives incorporate an orange color correction mask that compensates for unwanted dye absorptions and improves color accuracy in the prints. Although color processing is more complex and temperature-sensitive than black-and-white processing, the wide availability of commercial color processing and scarcity of service for black-and-white prompted the design of some black-and-white films which are processed in exactly the same way as standard color film.
Color reversal film produces positive transparencies, also known as diapositives. Transparencies can be reviewed with the aid of a magnifying loupe and a lightbox. If mounted in small metal, plastic or cardboard frames for use in a slide projector or slide viewer they are commonly called slides. Reversal film is often marketed as "slide film". Large-format color reversal sheet film is used by some professional photographers, typically to originate very-high-resolution imagery for digital scanning into color separations for mass photomechanical reproduction. Photographic prints can be produced from reversal film transparencies, but positive-to-positive print materials for doing this directly (e.g. Ektachrome paper, Cibachrome/Ilfochrome) have all been discontinued, so it now requires the use of an internegative to convert the positive transparency image into a negative transparency, which is then printed as a positive print.
Black-and-white reversal film exists but is very uncommon. Conventional black-and-white negative film can be reversal-processed to produce black-and-white slides, as by dr5 Chrome. Although kits of chemicals for black-and-white reversal processing may no longer be available to amateur darkroom enthusiasts, an acid bleaching solution, the only unusual component which is essential, is easily prepared from scratch. Black-and-white transparencies may also be produced by printing negatives onto special positive print film, still available from some specialty photographic supply dealers.
In order to produce a usable image, the film needs to be exposed properly. The amount of exposure variation that a given film can tolerate, while still producing an acceptable level of quality, is called its exposure latitude. Color print film generally has greater exposure latitude than other types of film. Additionally, because print film must be printed to be viewed, after-the-fact corrections for imperfect exposure are possible during the printing process.
The concentration of dyes or silver halide crystals remaining on the film after development is referred to as optical density, or simply density; the optical density is proportional to the logarithm of the optical transmission coefficient of the developed film. A dark image on the negative is of higher density than a more transparent image.
Most films are affected by the physics of silver grain activation (which sets a minimum amount of light required to expose a single grain) and by the statistics of random grain activation by photons. The film requires a minimum amount of light before it begins to expose, and then responds by progressive darkening over a wide dynamic range of exposure until all of the grains are exposed, and the film achieves (after development) its maximum optical density.
Over the active dynamic range of most films, the density of the developed film is proportional to the logarithm of the total amount of light to which the film was exposed, so the transmission coefficient of the developed film is proportional to a power of the reciprocal of the brightness of the original exposure. The plot of the density of the film image against the log of the exposure is known as an H&D curve. This effect is due to the statistics of grain activation: as the film becomes progressively more exposed, each incident photon is less likely to impact a still-unexposed grain, yielding the logarithmic behavior. A simple, idealized statistical model yields the equation density = 1 − (1 − k)^light, where light is proportional to the number of photons hitting a unit area of film, k is the probability of a single photon striking a grain (based on the size of the grains and how closely spaced they are), and density is the proportion of grains that have been hit by at least one photon. The relationship between density and log exposure is linear for photographic films except at the extreme ranges of maximum exposure (D-max) and minimum exposure (D-min) on an H&D curve, so the curve is characteristically S-shaped (as opposed to digital camera sensors, which have a linear response through the effective exposure range). The sensitivity (i.e., the ISO speed) of a film can be affected by changing the length or temperature of development, which would move the H&D curve to the left or right (see figure).
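The idealized grain-activation model and the definition of optical density can both be evaluated numerically. A short Python sketch (the value of k is an arbitrary illustrative choice):

```python
import math

def grain_density(light, k=1e-3):
    """Idealized model: fraction of grains hit by at least one photon,
    density = 1 - (1 - k)**light, with k the per-photon hit probability."""
    return 1 - (1 - k) ** light

def optical_density(transmission):
    """Optical density is the negative log of the transmission coefficient."""
    return -math.log10(transmission)

# density rises with exposure and saturates (D-max) in this normalized model
for exposure in (10, 100, 1000, 10000):
    print(exposure, round(grain_density(exposure), 4))

print(optical_density(0.01))   # a film passing 1% of light has density 2.0
```

Plotting grain_density against log(exposure) reproduces the characteristic S-shape: a slow toe at low exposure, a roughly linear middle, and a shoulder as the grains run out.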
If parts of the image are exposed heavily enough to approach the maximum density possible for a print film, then they will begin losing the ability to show tonal variations in the final print. Usually those areas will be considered overexposed and will appear as featureless white on the print. Some subject matter is tolerant of very heavy exposure. For example, sources of brilliant light, such as a light bulb or the sun, generally appear best as a featureless white on the print.
Likewise, if part of an image receives less than the beginning threshold level of exposure, which depends upon the film's sensitivity to light – or speed – the film there will have no appreciable image density, and will appear on the print as a featureless black. Some photographers use their knowledge of these limits to determine the optimum exposure for a photograph; for one example, see the Zone System. Most automatic cameras instead try to achieve a particular average density.
Color films can have many layers. The film base can have an antihalation layer applied to it or be dyed. This layer prevents light from reflecting from within the film, increasing image quality. It can also make films exposable on only one side, as it prevents exposure from behind the film. This layer is bleached after development to make it clear, thus making the film transparent. The antihalation layer, besides having a black colloidal silver sol pigment for absorbing light, can also contain two UV absorbents to improve the lightfastness of the developed image, an oxidized developer scavenger, dyes for compensating for optical density during printing, solvents, gelatin and the disodium salt of 3,5-disulfocatechol. If applied to the back of the film, it also serves to prevent scratching, as an antistatic measure due to its conductive carbon content, and as a lubricant to help transport the film through mechanisms. The antistatic property is necessary to prevent the film from getting fogged under low humidity, and measures to avoid static are present in most if not all films. If applied on the back, the layer is removed during film processing. It may be on the back of the film base in triacetate film bases, or at the front in PET film bases, below the emulsion stack. An anticurl layer and a separate antistatic layer may be present in thin high-resolution films that have the antihalation layer below the emulsion. PET film bases are often dyed, especially because PET can serve as a light pipe; black-and-white film bases tend to have a higher level of dyeing applied to them. The film base needs to be transparent but with some density, perfectly flat, insensitive to light, chemically stable, resistant to tearing, and strong enough to be handled manually and by camera mechanisms and film-processing equipment, while being chemically resistant to moisture and the chemicals used during processing without losing strength, flexibility or changing in size.
The subbing layer is essentially an adhesive that allows the subsequent layers to stick to the film base. The film base was initially made of highly flammable cellulose nitrate, which was replaced by cellulose acetate films, often cellulose triacetate film (safety film), which in turn was replaced in many films (such as all print films, most duplication films and some other specialty films) by a PET (polyethylene terephthalate) plastic film base. Films with a triacetate base can suffer from vinegar syndrome, a decomposition process accelerated by warm and humid conditions, which releases acetic acid, the characteristic component of vinegar, imparting a strong vinegar smell to the film, accelerating damage within the film and possibly even damaging surrounding metal and films. Films are usually spliced using a special adhesive tape; those with PET layers can be ultrasonically spliced, or their ends melted and then spliced.
The emulsion layers of films are made by dissolving pure silver in nitric acid to form silver nitrate crystals, which are mixed with other chemicals to form silver halide grains, which are then suspended in gelatin and applied to the film base. The size, and hence the light sensitivity, of these grains determines the speed of the film; because films contain real silver (as silver halide), faster films with larger crystals are more expensive and potentially subject to variations in the price of silver metal. Faster films also show more grain, since the crystals are larger. Each crystal is often 0.2 to 2 microns in size; in color films, the dye clouds that form around the silver halide crystals are often 25 microns across. The crystals can be shaped as cubes, flat rectangles or tetradecahedra, or be flat and resemble a triangle with or without clipped edges; the latter type is known as a tabular or T-grain crystal. Films using T-grains are more sensitive to light without using more silver halide, since flattening the crystals and enlarging their footprint increases the surface area exposed to light without simply increasing their volume.
T-grains can also have a hexagonal shape. These grains also have reduced sensitivity to blue light, which is an advantage, since silver halide is more sensitive to blue light than to other colors. This was traditionally addressed by adding a blue-blocking filter layer to the film emulsion, but T-grains have allowed this layer to be removed. The grains may also have a "core" and "shell", where the core, made of silver iodobromide, has a higher iodine content than the shell, improving light sensitivity; such grains are known as Σ-grains.
The exact silver halide used is typically silver bromide, silver bromochloroiodide, silver iodobromide, or a combination of silver bromide, chloride and iodide.
Silver halide crystals can be made in several shapes for use in photographic films. For example, AgBrCl hexagonal tabular grains can be used for color negative films, AgBr octahedral grains can be used for instant color photography films, AgBrI cubo-octahedral grains can be used for color reversal films, AgBr hexagonal tabular grains can be used for medical X-ray films, and AgBrCl cubic grains can be used for graphic arts films.
In color films, each emulsion layer has silver halide crystals that are sensitized to one particular color (wavelength of light) via sensitizing dyes, so that each layer responds to only one color of light and not to others; silver halide particles are intrinsically sensitive only to wavelengths below 450 nm (blue light). The sensitizing dyes are adsorbed at dislocations in the silver halide particles in the emulsion. A sensitizing dye may itself be supersensitized with a supersensitizing dye, which assists its function and improves the efficiency of photon capture by the silver halide. Each layer has a different type of color-dye-forming coupler: in the blue-sensitive layer the coupler forms a yellow dye, in the green-sensitive layer a magenta dye, and in the red-sensitive layer a cyan dye. Color films often have a UV-blocking layer. Each emulsion layer in a color film may itself comprise three sublayers of slow, medium and fast grains, allowing the film to capture higher-contrast images. The color dye couplers sit inside oil droplets dispersed in the emulsion around the silver halide crystals; the oil droplets act as a surfactant and also protect the couplers from chemical reactions with the silver halide and the surrounding gelatin. During development, oxidized developer diffuses into the oil droplets and combines with the dye couplers to form dye clouds, which form only around exposed silver halide crystals. The fixer then removes the silver halide crystals, leaving only the dye clouds: developed color films may therefore contain no silver even though undeveloped films do, and the fixer gradually accumulates silver, which can be recovered through electrolysis.
Color films also contain light filters to filter out certain colors as the light passes through the film: often there is a blue light filter between the blue and green sensitive layers and a yellow filter before the red sensitive layer; in this way each layer is made sensitive to only a certain color of light.
The couplers need to be made resistant to diffusion (non-diffusible) so that they will not move between the layers of the film and thus cause incorrect color rendition as the couplers are specific to either cyan, magenta or yellow colors. This is done by making couplers with a ballast group such as a lipophilic group (oil-protected) and applying them in oil droplets to the film, or a hydrophilic group, or in a polymer layer such as a loadable latex layer with oil-protected couplers, in which case they are considered to be polymer-protected.
The color couplers may be colorless and be chromogenic or be colored. Colored couplers are used to improve the color reproduction of film. The first coupler which is used in the blue layer remains colorless to allow all light to pass through, but the coupler used in the green layer is colored yellow, and the coupler used in the red layer is light pink. Yellow was chosen to block any remaining blue light from exposing the underlying green and red layers (since yellow can be made from green and red). Each layer should only be sensitive to a single color of light and allow all others to pass through. Because of these colored couplers, the developed film appears orange. Colored couplers mean that corrections through color filters need to be applied to the image before printing. Printing can be carried out by using an optical enlarger, or by scanning the image, correcting it using software and printing it using a digital printer.
Kodachrome films have no couplers; the dyes are instead formed by a long sequence of steps, limiting adoption among smaller film processing companies.
Black and white films are very simple by comparison, only consisting of silver halide crystals suspended in a gelatin emulsion which sits on a film base with an antihalation back.
Many films contain a top supercoat layer to protect the emulsion layers from damage. Some manufacturers design their films with daylight, tungsten (named after the tungsten filament of incandescent and halogen lamps) or fluorescent lighting in mind, recommending the use of lens filters, light meters and test shots in some situations to maintain color balance, or publishing flash guide numbers for the film's speed, which are divided by the subject's distance from the camera to obtain an appropriate f-number to set on the lens.
Examples of color films include Kodachrome, often processed using the K-14 process; Kodacolor; Ektachrome, often processed using the E-6 process; and Fujifilm Superia, processed using the C-41 process. The chemicals and the color dye couplers on the film may vary depending on the process used to develop the film.
=== Film speed ===
Film speed describes a film's threshold sensitivity to light. The international standard for rating film speed is the ISO scale, which combines both the ASA speed and the DIN speed in the format ASA/DIN: under this convention, a film with an ASA speed of 400 is labeled 400/27°. A fourth rating standard is GOST, developed by the Russian standards authority. See the film speed article for a table of conversions between ASA, DIN and GOST film speeds.
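The relationship between the two scales is logarithmic, and can be sketched in a few lines of Python (a minimal illustration; simple rounding is assumed here, matching the common ASA/DIN pairs):

```python
import math

def asa_to_din(asa: float) -> int:
    """Arithmetic ASA speed -> logarithmic DIN speed (degrees)."""
    return round(10 * math.log10(asa) + 1)

def din_to_asa(din: int) -> float:
    """DIN speed (degrees) -> approximate arithmetic ASA speed."""
    return 10 ** ((din - 1) / 10)

print(asa_to_din(400))  # 27, i.e. the film is labeled 400/27°
print(asa_to_din(100))  # 21, i.e. 100/21°
```

Because the DIN scale is logarithmic, doubling the ASA speed adds roughly 3 degrees: 100/21°, 200/24°, 400/27° and so on.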
Common film speeds include ISO 25, 50, 64, 100, 160, 200, 400, 800, 1600 and 3200. Consumer print films are usually in the ISO 100 to ISO 800 range. Some films, like Kodak's Technical Pan, are not ISO rated and therefore careful examination of the film's properties must be made by the photographer before exposure and development. ISO 25 film is very "slow", as it requires much more exposure to produce a usable image than "fast" ISO 800 film. Films of ISO 800 and greater are thus better suited to low-light situations and action shots (where the short exposure time limits the total light received). The benefit of slower film is that it usually has finer grain and better color rendition than fast film. Professional photographers of static subjects such as portraits or landscapes usually seek these qualities, and therefore require a tripod to stabilize the camera for a longer exposure. A professional photographing subjects such as rapidly moving sports or in low-light conditions will inevitably choose a faster film.
A film with a particular ISO rating can be push-processed, or "pushed", to behave like a film with a higher ISO, by developing for a longer time or at a higher temperature than usual. More rarely, a film can be "pulled" to behave like a "slower" film. Pushing generally coarsens grain and increases contrast, reducing dynamic range, to the detriment of overall quality. Nevertheless, it can be a useful tradeoff in difficult shooting environments, if the alternative is no usable shot at all.
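The amount of push or pull is conventionally expressed in stops, which follow from the ratio of the shooting exposure index to the film's rated speed. A small sketch (the function name is illustrative, not standard terminology):

```python
import math

def push_stops(rated_iso: float, exposure_index: float) -> float:
    """Stops of push (positive) or pull (negative) when a film rated at
    rated_iso is exposed and developed as if it were exposure_index."""
    return math.log2(exposure_index / rated_iso)

print(push_stops(400, 1600))  # 2.0  (ISO 400 film pushed two stops)
print(push_stops(400, 200))   # -1.0 (pulled one stop)
```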
=== Special films ===
Instant photography, as popularized by Polaroid, uses a special type of camera and film that automates and integrates development, without the need of further equipment or chemicals. This process is carried out immediately after exposure, as opposed to regular film, which is developed afterwards and requires additional chemicals. See instant film.
Films can be made to record non-visible ultraviolet (UV) and infrared (IR) radiation. These films generally require special equipment; for example, most photographic lenses are made of glass and therefore filter out most ultraviolet light, so expensive lenses made of quartz must be used instead. Infrared films may be shot in standard cameras using an infrared band-pass or long-pass filter, although the shifted infrared focal point must be compensated for.
Exposure and focusing are difficult when using UV or IR film with a camera and lens designed for visible light. The ISO standard for film speed only applies to visible light, so visual-spectrum light meters are nearly useless. Film manufacturers can supply suggested equivalent film speeds under different conditions, and recommend heavy bracketing (e.g., "with a certain filter, assume ISO 25 under daylight and ISO 64 under tungsten lighting"). This allows a light meter to be used to estimate an exposure. The focal point for IR is slightly farther away from the camera than visible light, and UV slightly closer; this must be compensated for when focusing. Apochromatic lenses are sometimes recommended due to their improved focusing across the spectrum.
Film optimized for detecting X-ray radiation is commonly used for medical radiography and industrial radiography by placing the subject between the film and a source of X-rays or gamma rays, without a lens, as if a translucent object were imaged by being placed between a light source and standard film. Unlike other types of film, X-ray film has a sensitive emulsion on both sides of the carrier material. This reduces the X-ray exposure needed for an acceptable image – a desirable feature in medical radiography. The film is usually placed in close contact with phosphor screen(s) and/or thin lead-foil screen(s), the combination having a higher sensitivity to X-rays. Because film is sensitive to X-rays, its contents may be wiped by airport baggage scanners if the film has a speed higher than ISO 800. This property is exploited in film badge dosimeters.
Film optimized for detecting X-rays and gamma rays is sometimes used for radiation dosimetry.
Film has a number of disadvantages as a scientific detector: it is difficult to calibrate for photometry, it is not re-usable, it requires careful handling (including temperature and humidity control) for best calibration, and the film must physically be returned to the laboratory and processed. Against this, photographic film can be made with a higher spatial resolution than any other type of imaging detector, and, because of its logarithmic response to light, has a wider dynamic range than most digital detectors. For example, Agfa 10E56 holographic film has a resolution of over 4,000 lines/mm – equivalent to a pixel size of 0.125 micrometers – and an active dynamic range of over five orders of magnitude in brightness, compared to typical scientific CCDs that might have pixels of about 10 micrometers and a dynamic range of 3–4 orders of magnitude.
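The pixel-size equivalence quoted above follows from the sampling rule that resolving one line pair requires two pixels; a quick check:

```python
def equivalent_pixel_um(lines_per_mm: float) -> float:
    """Equivalent pixel pitch in micrometres for film resolving the given
    line pairs per mm: one line pair needs two pixels, so the pitch is
    1 / (2 * lp/mm), converted from mm to micrometres."""
    return 1000.0 / (2 * lines_per_mm)

print(equivalent_pixel_um(4000))  # 0.125 µm, matching the Agfa 10E56 figure
```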
Special films are used for the long exposures required by astrophotography.
Lith films are used in the printing industry. In particular, when exposed through a ruled-glass screen or contact screen, they can generate halftone images suitable for printing.
=== Encoding of metadata ===
Some film cameras have the ability to read metadata from the film canister or encode metadata on film negatives.
==== Negative imprinting ====
Negative imprinting is a feature of some film cameras, in which the date, shutter speed and aperture setting are recorded on the negative directly as the film is exposed. The first known version of this process was patented in the United States in 1975, using half-silvered mirrors to direct the readout of a digital clock and mix it with the light rays coming through the main camera lens. Modern SLR cameras use an imprinter fixed to the back of the camera on the film backing plate. It uses a small LED display for illumination and optics to focus the light onto a specific part of the film. The LED display is exposed on the negative at the same time the picture is taken. Digital cameras can often encode all the information in the image file itself. The Exif format is the most commonly used format.
==== DX codes ====
In the 1980s, Kodak developed DX Encoding (from Digital indeX), or DX coding, a feature that was eventually adopted by all camera and film manufacturers. DX encoding provides information on both the film cassette and on the film itself regarding the type of film, number of exposures and speed (ISO/ASA rating) of the film. It consists of three types of identification. The first is a barcode near the film opening of the cassette, identifying the manufacturer, film type and processing method (see image below left); this is used by photofinishing equipment during film processing. The second is a barcode on the edge of the film (see image below right), also used during processing, which indicates the film type, manufacturer and frame number, and synchronizes the position of the frame. The third, known as the DX Camera Auto Sensing (CAS) code, consists of a series of 12 metal contacts on the film cassette; beginning with cameras manufactured after 1985, these allow the camera to detect the type of film, number of exposures and ISO of the film, and use that information to automatically adjust the camera settings for the speed of the film.
=== Common sizes of film ===
== History ==
The earliest practical photographic process was the daguerreotype; it was introduced in 1839 and did not use film. The light-sensitive chemicals were formed on the surface of a silver-plated copper sheet. The calotype process produced paper negatives. Beginning in the 1850s, thin glass plates coated with photographic emulsion became the standard material for use in the camera. Although fragile and relatively heavy, the glass used for photographic plates was of better optical quality than early transparent plastics and was, at first, less expensive. Glass plates continued to be used long after the introduction of film, and were used for astrophotography and electron micrography until the early 2000s, when they were supplanted by digital recording methods. Ilford continues to manufacture glass plates for special scientific applications.
The first flexible photographic roll film was sold by George Eastman in 1885, but this original "film" was actually a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and attached to a sheet of hardened clear gelatin. The first transparent plastic roll film followed in 1889. It was made from highly flammable cellulose nitrate film.
Although cellulose acetate or "safety film" had been introduced by Kodak in 1908, at first it found only a few special applications as an alternative to the hazardous nitrate film, which had the advantages of being considerably tougher, slightly more transparent, and cheaper. The changeover was completed for X-ray films in 1933, but although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm films until it was finally discontinued in 1951.
Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876. Their work enabled the first quantitative measure of film speed to be devised. They developed H&D curves, which are specific to each film and paper. These curves plot photographic density against the log of exposure, determining the sensitivity, or speed, of the emulsion and enabling correct exposure.
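In the straight-line region of an H&D curve, density rises linearly with the logarithm of exposure. A minimal sketch with illustrative values (the gamma and threshold chosen here are arbitrary, not taken from any real emulsion):

```python
import math

def density(exposure: float, gamma: float = 0.7, log_h0: float = -2.0) -> float:
    """Density in the straight-line region of an illustrative H&D curve:
    D = gamma * (log10(H) - log10(H0)), clipped at zero (base fog ignored)."""
    return max(0.0, gamma * (math.log10(exposure) - log_h0))

# A tenfold increase in exposure raises density by gamma:
print(round(density(1.0) - density(0.1), 6))  # 0.7
```

The slope gamma of this straight-line region is what characterizes an emulsion's contrast, while the threshold log_h0 relates to its speed.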
=== Spectral sensitivity ===
Early photographic plates and films were usefully sensitive only to blue, violet and ultraviolet light. As a result, the relative tonal values in a scene registered roughly as they would appear if viewed through a piece of deep blue glass. Blue skies with interesting cloud formations photographed as a white blank. Any detail visible in masses of green foliage was due mainly to the colorless surface gloss. Bright yellows and reds appeared nearly black. Most skin tones came out unnaturally dark, and uneven or freckled complexions were exaggerated. Photographers sometimes compensated by adding in skies from separate negatives that had been exposed and processed to optimize the visibility of the clouds, by manually retouching their negatives to adjust problematic tonal values, and by heavily powdering the faces of their portrait sitters.
In 1873, Hermann Wilhelm Vogel discovered that the spectral sensitivity could be extended to green and yellow light by adding very small quantities of certain dyes to the emulsion. The instability of early sensitizing dyes and their tendency to rapidly cause fogging initially confined their use to the laboratory, but in 1883 the first commercially dye-sensitized plates appeared on the market. These early products, described as isochromatic or orthochromatic depending on the manufacturer, made possible a more accurate rendering of colored subject matter into a black-and-white image. Because they were still disproportionately sensitive to blue, the use of a yellow filter and a consequently longer exposure time were required to take full advantage of their extended sensitivity.
In 1894, the Lumière Brothers introduced their Lumière Panchromatic plate, which was made sensitive, although very unequally, to all colors including red. New and improved sensitizing dyes were developed, and in 1902 the much more evenly color-sensitive Perchromo panchromatic plate was being sold by the German manufacturer Perutz. The commercial availability of highly panchromatic black-and-white emulsions also accelerated the progress of practical color photography, which requires good sensitivity to all the colors of the spectrum for the red, green and blue channels of color information to all be captured with reasonable exposure times.
However, all of these were glass-based plate products. Panchromatic emulsions on a film base were not commercially available until the 1910s and did not come into general use until much later. Many photographers who did their own darkroom work preferred to go without the seeming luxury of sensitivity to red – a rare color in nature and uncommon even in human-made objects – rather than be forced to abandon the traditional red darkroom safelight and process their exposed film in complete darkness. Kodak's popular Verichrome black-and-white snapshot film, introduced in 1931, remained a red-insensitive orthochromatic product until 1956, when it was replaced by Verichrome Pan. Amateur darkroom enthusiasts then had to handle the undeveloped film by the sense of touch alone.
=== Introduction to color ===
Experiments with color photography began almost as early as photography itself, but the three-color principle underlying all practical processes was not set forth until 1855, not demonstrated until 1861, and not generally accepted as "real" color photography until it had become an undeniable commercial reality in the early 20th century. Although color photographs of good quality were being made by the 1890s, they required special equipment, separate and long exposures through three color filters, complex printing or display procedures, and highly specialized skills, so they were then exceedingly rare.
The first practical and commercially successful color "film" was the Lumière Autochrome, a glass plate product introduced in 1907. It was expensive and not sensitive enough for hand-held "snapshot" use. Film-based versions were introduced in the early 1930s and the sensitivity was later improved. These were "mosaic screen" additive color products, which used a simple layer of black-and-white emulsion in combination with a layer of microscopically small color filter elements. The resulting transparencies or "slides" were very dark because the color filter mosaic layer absorbed most of the light passing through. The last films of this type were discontinued in the 1950s, but Polachrome "instant" slide film, introduced in 1983, temporarily revived the technology.
"Color film" in the modern sense of a subtractive color product with a multi-layered emulsion was born with the introduction of Kodachrome for home movies in 1935 and as lengths of 35 mm film for still cameras in 1936; however, it required a complex development process, with multiple dyeing steps as each color layer was processed separately. 1936 also saw the launch of Agfa Color Neu, the first subtractive three-color reversal film for movie and still camera use to incorporate color dye couplers, which could be processed at the same time by a single color developer. The film had some 278 patents. The incorporation of color couplers formed the basis of subsequent color film design, with the Agfa process initially adopted by Ferrania, Fuji and Konica and lasting until the late 70s/early 1980s in the West and 1990s in Eastern Europe. The process used dye-forming chemicals that terminated with sulfonic acid groups and had to be coated one layer at a time. It was a further innovation by Kodak, using dye-forming chemicals which terminated in 'fatty' tails which permitted multiple layers to coated at the same time in a single pass, reducing production time and cost that later became universally adopted along with the Kodak C-41 process.
Although color film became more widely available after WWII, during the next several decades it remained much more expensive than black-and-white and required much more light; these factors, combined with the greater cost of processing and printing, delayed its widespread adoption. Decreasing cost, increasing sensitivity and standardized processing gradually overcame these impediments. By the 1970s, color film predominated in the consumer market, while the use of black-and-white film was increasingly confined to photojournalism and fine art photography.
=== Effect on lens and equipment design ===
Photographic lenses and equipment are designed around the film to be used. Although the earliest photographic materials were sensitive only to the blue-violet end of the spectrum, partially color-corrected achromatic lenses were normally used, so that when the photographer brought the visually brightest yellow rays to a sharp focus, the visually dimmest but photographically most active violet rays would be correctly focused, too. The introduction of orthochromatic emulsions required the whole range of colors from yellow to blue to be brought to an adequate focus. Most plates and films described as orthochromatic or isochromatic were practically insensitive to red, so the correct focus of red light was unimportant; a red window could be used to view the frame numbers on the paper backing of roll film, as any red light which leaked around the backing would not fog the film; and red lighting could be used in darkrooms. With the introduction of panchromatic film, the whole visible spectrum needed to be brought to an acceptably sharp focus. In all cases a color cast in the lens glass or faint colored reflections in the image were of no consequence as they would merely change the contrast a little. This was no longer acceptable when using color film. More highly corrected lenses for newer emulsions could be used with older emulsion types, but the converse was not true.
The progression of lens design for later emulsions is of practical importance when considering the use of old lenses, still often used on large-format equipment; a lens designed for orthochromatic film may have visible defects with a color emulsion; a lens for panchromatic film will be better but not as good as later designs.
The filters used were different for the different film types.
=== Decline ===
Film remained the dominant form of photography until the early 21st century, when advances in digital photography drew consumers to digital formats. The first consumer electronic camera, the Sony Mavica, was released in 1981, and the first digital camera, the Fuji DS-X, in 1989. These, coupled with advances in software such as Adobe Photoshop (also released in 1989), improvements in consumer-level digital color printers, and the increasingly widespread presence of computers in households during the late 20th century, facilitated the uptake of digital photography by consumers.
The initial take-up of digital cameras in the 1990s was slow due to their high cost and the relatively low resolution of the images (compared to 35mm film), but they began to make inroads among consumers in the point-and-shoot market and in professional applications such as sports photography, where speed of results, including the ability to upload pictures directly from stadia, was more critical for newspaper deadlines than resolution. A key difference compared to film was that early digital cameras were soon obsolete, forcing users into a frequent cycle of replacement until the technology began to mature, whereas previously people might have owned only one or two film cameras in their lifetime. Consequently, photographers demanding higher quality in sectors such as weddings, portraiture and fashion, where medium format film predominated, were the last to switch, once resolution began to reach acceptable levels with the advent of 'full frame' sensors, 'digital backs' and medium format digital cameras.
Film camera sales based on CIPA figures peaked in 1998, before declining rapidly after 2000 to reach almost zero by the end of 2005 as consumers switched en masse to digital cameras (sales of which subsequently peaked in 2010). These changes foretold a similar reduction in film sales. Figures for Fujifilm show that global film sales, having grown 30% in the preceding five years, peaked around the year 2000. Film sales then began a period of year-on-year decline, of increasing magnitude from 2003 to 2008, reaching 30% per annum before slowing. By 2011, sales were less than 10% of the peak volumes. Similar patterns were experienced by other manufacturers, varying by market exposure, with global film sales estimated at 200 million rolls in 1999 declining to only 5 million rolls by 2009. This period wreaked havoc on the film manufacturing industry and its supply chain, which had been optimised for high production volumes; plummeting sales left firms fighting for survival. Agfa-Gevaert's decision to sell off its consumer-facing arm (Agfaphoto) in 2004 was followed by a series of bankruptcies of established film manufacturers: Ilford Imaging UK in 2004, Agfaphoto in 2005, Forte in 2007, Foton in 2007, Polaroid in 2001 and 2008, Ferrania in 2009, and Eastman Kodak in 2012 (the latter surviving only after massive downsizing, while Ilford was rescued by a management buyout). Konica-Minolta closed its film manufacturing business and exited the photographic market entirely in 2006, selling its camera patents to Sony, while Fujifilm successfully and rapidly diversified into other markets. The impact of this paradigm shift in technology subsequently rippled through the downstream photo processing and finishing businesses.
Although modern photography is dominated by digital users, film continues to be used by enthusiasts. Film remains the preference of some photographers because of its distinctive "look".
=== Renewed interest in recent years ===
Although digital cameras are by far the most commonly used photographic tool and the selection of available photographic films is much smaller than it once was, sales of photographic film have been on a steady upward trend. Kodak (which was under bankruptcy protection from January 2012 to September 2013) and other companies have noticed this trend: Dennis Olbrich, President of the Imaging Paper, Photo Chemicals and Film division at Kodak Alaris, has stated that sales of their photographic films have been growing over the past three or four years. UK-based Ilford has confirmed this trend and conducted extensive research on the subject; its research shows that 60% of current film users had only started using film in the past five years, and that 30% of current film users were under 35 years old. Annual film sales, which were estimated to reach a low of 5 million rolls in 2009, have since doubled to around 10 million rolls in 2019. A key challenge for the industry is that production relies on the remaining coating facilities, which were built for the peak years of demand; as demand has grown, capacity constraints in other process steps that had been downscaled, such as film converting, have caused production bottlenecks for companies such as Kodak.
In 2013 Ferrania, an Italy-based film manufacturer that had ceased production of photographic films between 2009 and 2010, was acquired by the new Film Ferrania S.R.L., which took over a small part of the old company's manufacturing facilities, including its former research facility, and re-employed some workers who had been laid off three years earlier when the company stopped producing film.
In November of the same year, the company started a crowdfunding campaign with the goal of raising $250,000 to buy tooling and machines from the old factory, with the intention of putting some of the discontinued films back into production. The campaign succeeded, ending in October 2014 with over $320,000 raised. In February 2017, Film Ferrania unveiled "P30", an 80 ASA panchromatic black-and-white film, in 35mm format.
Kodak announced on January 5, 2017, that Ektachrome, one of its best-known transparency films, discontinued between 2012 and 2013, would be reformulated and manufactured once again, in 35 mm still and Super 8 motion picture formats. Following the success of the release, Kodak expanded Ektachrome's availability by also releasing the film in 120 and 4x5 formats.
Japan-based Fujifilm's instant film "Instax" cameras and paper have also proven to be very successful, and have replaced traditional photographic films as Fujifilm's main film products, while they continue to offer traditional photographic films in various formats and types.
=== Reusable film ===
In 2023, Finnish chemist Sami Vuori invented a reusable film that uses synthetic hackmanite (Na8Al6Si6O24(Cl,S)2) as the photosensitive medium. The film contains small hackmanite particles that turn purple on exposure to ultraviolet radiation (e.g. 254 nm), after which the film is loaded into the camera. Visible light bleaches the hackmanite particles back to white, giving rise to a positive image. The film can then be scanned with a typical document scanner and colored again with UV for reuse. To preserve an image, the film can be kept in a dark place, as the bleaching process stops completely in the absence of light.
Beyond its reusability and the absence of any developing step or chemicals, another advantage of this type of photochromic film is that it needs no gelatin, making it a vegan alternative. Its main disadvantage, however, is its very low sensitivity, requiring hours of exposure time. This currently restricts the film to ultra-long-exposure photography, for example of a city center where the photographer wants to fade out all movement.
Another reusable film invented by Liou et al. is based on 9-methylacridinium-intercalated clay particles, but erasing the image requires dipping the material in sulfuric acid.
== Image gallery ==
== See also ==
Videotape
Fogging (photography)
List of photographic equipment makers
List of photographic films
Oversampled binary image sensor
Photrio (formerly APUG)
Tungsten film
== Explanatory notes ==
== References ==
== Bibliography ==
Jacobson, Ralph E. (2000). The Focal Manual of Photography: Photographic and Digital Imaging (9th ed.). Boston: Focal Press. ISBN 978-0-240-51574-8.
Mees, Kenneth; James, T.H. (1966). Theory of the Photographic Process. Collier Macmillan Ltd. ISBN 978-0023601903.
== External links ==
Kosmo Foto article on future of film
In integrated circuits (ICs), interconnects are structures that connect two or more circuit elements (such as transistors) together electrically. The design and layout of interconnects on an IC is vital to its proper function, performance, power efficiency, reliability, and fabrication yield. The material interconnects are made from depends on many factors. Chemical and mechanical compatibility with the semiconductor substrate and the dielectric between the levels of interconnect is necessary, otherwise barrier layers are needed. Suitability for fabrication is also required; some chemistries and processes prevent the integration of materials and unit processes into a larger technology (recipe) for IC fabrication. In fabrication, interconnects are formed during the back-end-of-line after the fabrication of the transistors on the substrate.
Interconnects are classified as local or global depending on the signal propagation distance they are able to support. The width and thickness of the interconnect, as well as the material from which it is made, are some of the significant factors that determine the distance a signal may propagate. Local interconnects connect circuit elements that are very close together, such as transistors separated by ten or so other contiguously laid out transistors. Global interconnects can transmit further, such as over large-area sub-circuits. Consequently, local interconnects may be formed from materials with relatively high electrical resistivity such as polycrystalline silicon (sometimes silicided to extend its range) or tungsten. To extend the distance an interconnect may reach, various circuits such as buffers or restorers may be inserted at various points along a long interconnect.
== Interconnect properties ==
The geometric properties of an interconnect are width, thickness, spacing (the distance between an interconnect and another on the same level), pitch (the sum of the width and spacing), and aspect ratio, or AR, (the thickness divided by width). The width, spacing, AR, and ultimately, pitch, are constrained in their minimum and maximum values by design rules that ensure the interconnect (and thus the IC) can be fabricated by the selected technology with a reasonable yield. Width is constrained to ensure minimum width interconnects do not suffer breaks, and maximum width interconnects can be planarized by chemical mechanical polishing (CMP). Spacing is constrained to ensure adjacent interconnects can be fabricated without any conductive material bridging. Thickness is determined solely by the technology, and the aspect ratio, by the chosen width and set thickness. In technologies that support multiple levels of interconnects, each group of contiguous levels, or each level, has its own set of design rules.
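The definitions above (pitch as the sum of width and spacing, aspect ratio as thickness divided by width) can be expressed as a small sketch; the numeric values below are hypothetical and not tied to any real process technology.

```python
# Illustrative sketch of the geometric relationships described above.
# All numeric values are hypothetical, not taken from any real process.

def pitch(width_nm: float, spacing_nm: float) -> float:
    """Pitch is the sum of interconnect width and spacing."""
    return width_nm + spacing_nm

def aspect_ratio(thickness_nm: float, width_nm: float) -> float:
    """Aspect ratio (AR) is thickness divided by width."""
    return thickness_nm / width_nm

# Example: a 50 nm wide, 100 nm thick wire with 50 nm spacing
print(pitch(50, 50))          # 100 nm pitch
print(aspect_ratio(100, 50))  # AR = 2.0
```

Design rules would then be expressed as minimum/maximum bounds on these quantities for each level of interconnect.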
Before the introduction of CMP for planarizing IC layers, interconnects had design rules that specified larger minimum widths and spaces than the lower level to ensure that the underlying layer's rough topology did not cause breaks in the interconnect formed on top. The introduction of CMP has made finer geometries possible.
The AR is an important factor. In technologies that form interconnect structures with conventional processes, the AR is limited to ensure that the etch creating the interconnect, and the dielectric deposition that fills the voids in between interconnects with dielectric, can be done successfully. In those that form interconnect structures with damascene processes, the AR must permit successful etch of the trenches, deposition of the barrier metal (if needed) and interconnect material.
Interconnect layouts are further constrained by design rules that apply to collections of interconnects. For a given area, technologies that rely on CMP have density rules to ensure the whole IC has an acceptable variation in interconnect density. This is because the rate at which CMP removes material depends on the material's properties, and great variations in interconnect density can result in large areas of dielectric which can dish, resulting in poor planarity. To maintain acceptable density, dummy interconnects (or dummy wires) are inserted into regions with sparse interconnect density.
Historically, interconnects were routed in straight lines, and could change direction by using sections aligned 45° away from the direction of travel. As IC structure geometries became smaller, restrictions were imposed on interconnect direction to obtain acceptable yields. Initially, only global interconnects were subject to restrictions; they were made to run in straight lines aligned east–west or north–south. To allow easy routing, alternate levels of interconnect ran in the same alignment, so that changes in direction were achieved by connecting to a lower or upper level of interconnect through a via. Local interconnects, especially the lowest level (usually polysilicon), could assume a more arbitrary combination of routing options to attain a higher packing density.
== Materials ==
In ICs made on silicon, the most commonly used semiconductor, the first interconnects were made of aluminum. Aluminum was an ideal material for interconnects due to its ease of deposition and good adherence to silicon and silicon dioxide. Al interconnects are deposited by physical vapor deposition or chemical vapor deposition methods. They were originally patterned by wet etching, and later by various dry etching techniques.
Initially, pure aluminum was used but by the 1970s, substrate compatibility, junction spiking and reliability concerns (mostly concerning electromigration) forced the use of aluminum-based alloys containing silicon, copper, or both. By the late 1990s, the high resistivity of aluminum, coupled with the narrow widths of the interconnect structures forced by continuous feature size downscaling, resulted in prohibitively high resistance in interconnect structures. This forced aluminum's replacement by copper interconnects.
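A first-order illustration of why resistivity matters as widths shrink: a wire's resistance is R = ρL/(wt). The sketch below uses well-known bulk room-temperature resistivities; actual thin-film resistivities in ICs are higher because of surface and grain-boundary scattering, and the wire dimensions are hypothetical.

```python
# Sketch of how resistivity and cross-section set wire resistance.
# Bulk room-temperature resistivities; real thin-film values are higher.

RHO_AL = 2.65e-8  # ohm*m, bulk aluminium
RHO_CU = 1.68e-8  # ohm*m, bulk copper

def wire_resistance(rho, length_m, width_m, thickness_m):
    """R = rho * L / (w * t) for a rectangular wire."""
    return rho * length_m / (width_m * thickness_m)

# Hypothetical 1 mm long wire, 100 nm wide, 200 nm thick
for name, rho in [("Al", RHO_AL), ("Cu", RHO_CU)]:
    r = wire_resistance(rho, 1e-3, 100e-9, 200e-9)
    print(f"{name}: {r:.0f} ohms")  # Al: 1325 ohms, Cu: 840 ohms
```

The roughly 1.6× lower resistivity of copper is what motivated the industry-wide switch described above.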
In gallium arsenide (GaAs) ICs, which have been mainly used in application domains (e.g. monolithic microwave ICs) different to those of silicon, the predominant material used for interconnects is gold.
== Performance enhancements ==
To reduce the delay penalty caused by parasitic capacitance, the dielectric material used to insulate adjacent interconnects, and interconnects on different levels (the inter-level dielectric [ILD]), should have a dielectric constant that is as close to 1 as possible. A class of such materials, low-κ dielectrics, was introduced during the late 1990s and early 2000s for this purpose. As of January 2019, the most advanced materials reduce the dielectric constant to very low levels through highly porous structures, or through the creation of substantial air or vacuum pockets (air-gap dielectric). These materials often have low mechanical strength and are therefore restricted to the lowest level or levels of interconnect. The high density of interconnects at the lower levels, along with the minimal spacing, helps support the upper layers. Intel introduced air-gap dielectric in its 14 nm technology in 2014.
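Since interconnect delay scales with the RC product and a simple parallel-plate capacitance scales linearly with the dielectric constant, moving from a SiO2-like ILD (εr ≈ 3.9) to a hypothetical low-κ material (εr ≈ 2.5) reduces capacitance, and hence delay, proportionally. A minimal sketch with illustrative geometry:

```python
# Sketch: parallel-plate capacitance C = eps0 * eps_r * A / d, so the
# capacitance (and the RC delay it contributes) scales linearly with
# the relative dielectric constant. Geometry values are illustrative.

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def plate_capacitance(eps_r, area_m2, gap_m):
    return EPS0 * eps_r * area_m2 / gap_m

c_sio2 = plate_capacitance(3.9, 1e-12, 100e-9)  # SiO2-like ILD
c_lowk = plate_capacitance(2.5, 1e-12, 100e-9)  # hypothetical low-k ILD
print(c_lowk / c_sio2)  # ~0.64: capacitance and delay cut proportionally
```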
== Multi-level interconnects ==
ICs with complex circuits require multiple levels of interconnect to form circuits that have minimal area. As of 2018, the most complex ICs may have over 15 layers of interconnect. Each level of interconnect is separated from the next by a layer of dielectric. To make vertical connections between interconnects on different levels, vias are used. The top-most layers of a chip have the thickest, widest, and most widely separated wires, which gives them the least resistance and the smallest RC time constant, so they are used for power and clock distribution networks. The bottom-most metal layers of the chip, closest to the transistors, have thin, narrow, tightly packed wires used only for local interconnect. Adding layers can potentially improve performance, but it also reduces yield and increases manufacturing costs. ICs with a single metal layer typically use the polysilicon layer to "jump across" when one signal needs to cross another.
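The contrast between upper and lower wiring levels can be illustrated with a rough first-order model. All dimensions below are hypothetical, the capacitance is a crude parallel-plate estimate of coupling to a single neighboring wire, and bulk copper resistivity is assumed (real thin-film values are higher).

```python
# Sketch comparing RC time constants of a wide, thick upper-level wire
# and a narrow, thin lower-level wire of the same length.
# All dimensions are hypothetical; bulk Cu resistivity is used.

EPS0, EPS_R = 8.854e-12, 3.9   # SiO2-like dielectric constant
RHO_CU = 1.68e-8               # ohm*m, bulk copper

def rc_delay(length, width, thickness, spacing):
    r = RHO_CU * length / (width * thickness)
    # crude parallel-plate coupling capacitance to one sidewall neighbor
    c = EPS0 * EPS_R * thickness * length / spacing
    return r * c

lower = rc_delay(1e-4, 50e-9, 100e-9, 50e-9)   # local-wire dimensions
upper = rc_delay(1e-4, 500e-9, 1e-6, 500e-9)   # global-wire dimensions
print(upper < lower)  # True: upper levels have smaller RC at equal length
```

This is why power and clock networks are routed on the upper levels, as described above.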
The process used to form DRAM capacitors creates a rough and hilly surface, which makes it difficult to add metal interconnect layers and still maintain good yield.
In 1998, state-of-the-art DRAM processes had four metal layers, while state-of-the-art logic processes had seven metal layers.
In 2002, five or six layers of metal interconnect were common.
In 2009, 1 Gbit DRAM typically had three layers of metal interconnect; tungsten for the first layer and aluminum for the upper layers.
== See also ==
Antenna effect
Bonding pad
Carbon nanotubes in interconnects
Interconnect bottleneck
Optical interconnect
Parasitic extraction
== References ==
The control of fire by early humans was a critical technology enabling the evolution of humans. Fire provided a source of warmth and lighting, protection from predators (especially at night), a way to create more advanced hunting tools, and a method for cooking food. These cultural advances allowed human geographic dispersal, cultural innovations, and changes to diet and behavior. Additionally, creating fire allowed human activity to continue into the dark and colder hours of the evening.
Claims for the earliest definitive evidence of control of fire by a member of Homo range from 1.7 to 2.0 million years ago (Mya). Evidence for the "microscopic traces of wood ash" as controlled use of fire by Homo erectus, beginning roughly 1 million years ago, has wide scholarly support. Some of the earliest known traces of controlled fire were found at the Daughters of Jacob Bridge, Israel, and dated to ~790,000 years ago. At the site, archaeologists also found the oldest likely evidence (mainly, fish teeth that had been heated deep in a cave) for the controlled use of fire to cook food ~780,000 years ago. However, some studies suggest cooking started ~1.8 million years ago.
Flint blades burned in fires roughly 300,000 years ago were found near fossils of early but not entirely modern Homo sapiens in Morocco. Fire was used regularly and systematically by early modern humans to heat treat silcrete stone to increase its flake-ability for the purpose of toolmaking approximately 164,000 years ago at the South African site of Pinnacle Point. Evidence of widespread control of fire by anatomically modern humans dates to approximately 125,000 years ago.
== Control of fire ==
The use and control of fire was a gradual process proceeding through more than one stage. One was a change in habitat, from dense forest, where wildfires were rare but difficult to escape, to savanna (mixed grass/woodland) where wildfires were common but easier to survive. Such a change may have occurred about 3 million years ago, when the savanna expanded in East Africa due to cooler and drier climate.
The next stage involved interaction with burned landscapes and foraging in the wake of wildfires, as observed in various wild animals. In the African savanna, animals that preferentially forage in recently burned areas include savanna chimpanzees (a variety of Pan troglodytes verus), vervet monkeys (Cercopithecus aethiops) and a variety of birds, some of which also hunt insects and small vertebrates in the wake of grass fires.
The next step would be to make some use of residual hot spots that occur in the wake of wildfires. For example, foods found in the wake of wildfires tend to be either burned or undercooked. This might have provided incentives to place undercooked foods on a hotspot or to pull food out of the fire if it was in danger of getting burned. This would require familiarity with fire and its behavior.
An early step in the control of fire would have been transporting it from burned to unburned areas and lighting them on fire, providing advantages in food acquisition. Maintaining a fire over an extended period of time, as for a season (such as the dry season), may have led to the development of base campsites. Building a hearth or other fire enclosure such as a circle of stones would have been a later development. The ability to make fire, generally with a friction device with hardwood rubbing against softwood (as in a bow drill), was a later development.
Each of these stages could occur at different intensities, ranging from occasional or "opportunistic" to "habitual" to "obligate" (unable to survive without it).
== Lower Paleolithic evidence ==
Most of the evidence of controlled use of fire during the Lower Paleolithic is uncertain and has limited scholarly support. Some of the evidence is inconclusive because other plausible explanations, such as natural processes, exist for the findings. Current findings support Wonderwerk Cave, South Africa, as the site of the earliest known controlled use of fire, 1.0 Mya.
=== Africa ===
Findings from Wonderwerk provide the earliest evidence for controlled use of fire. Intact sediments were analyzed using micromorphological analysis. Fourier transform infrared microspectroscopy (mFTIR) yielded evidence, in the form of burned bones and ashed plant remains, that burning took place at the site 1.0 Mya.
East African sites, such as Chesowanja near Lake Baringo, Koobi Fora, and Olorgesailie in Kenya, show possible evidence that fire was controlled by early humans. In Chesowanja, archaeologists found red clay clasts dated to 1.4 Mya. These clasts must have been heated to 400 °C (750 °F) to harden. However, tree stumps burned in bush fires in East Africa produce clasts which, when broken by erosion, are like those described at Chesowanja. Controlled use of fire at Chesowanja is therefore unproven.
In Koobi Fora, sites show evidence of control of fire by Homo erectus at 1.5 Mya with findings of reddened sediment that could come from heating at 200–400 °C (400–750 °F). Evidence of possible human control of fire, found at Swartkrans, South Africa, includes burned bones, including ones with hominin-inflicted cut marks, along with Acheulean and bone tools. This site shows some of the earliest evidence of carnivorous behavior in H. erectus. A "hearth-like depression" that could have been used to burn bones was found in Olorgesailie, Kenya. However, it did not contain any charcoal, and no signs of fire have been observed. Some microscopic charcoal was found, but it could have resulted from a natural brush fire.
In Gadeb, Ethiopia, fragments of welded tuff that appeared to have been burned were found in Locality 8E but refiring of the rocks might have occurred due to local volcanic activity.
In the Middle Awash River Valley, cone-shaped depressions of reddish clay were found that could have been formed by temperatures of 200 °C (400 °F). These features, thought to have been created by burning tree stumps, were hypothesized to have been produced by early hominids lighting tree stumps so they could have fire away from their habitation site. This view is not widely accepted, though. Burned stones were found in Awash Valley, but volcanic welded tuff is found in the area, which could explain the burned stones.
Burned flints discovered near Jebel Irhoud, Morocco, dated by thermoluminescence to around 300,000 years old, were discovered in the same sedimentary layer as skulls of early Homo sapiens. Paleoanthropologist Jean-Jacques Hublin believes the flints were used as spear tips and left in fires used by the early humans for cooking food.
=== Asia ===
In Xihoudu in Shanxi Province, China, the black, blue, and grayish-green discoloration of mammalian bones found at the site illustrates evidence of burning by early hominids. In 1985, at a parallel site in China, Yuanmou in Yunnan Province, archaeologists found blackened mammal bones that date back to 1.7 Mya.
==== Middle East ====
A site at Bnot Ya'akov Bridge, Israel, has been claimed to show that H. erectus or H. ergaster controlled fire between 790,000 and 690,000 BP. AI-powered spectroscopy also helped researchers uncover evidence of the use of fire dating to between 800,000 and 1 million years ago: in an article published in June 2022, researchers from the Weizmann Institute of Science, together with researchers at the University of Toronto and the Hebrew University of Jerusalem, described the use of deep-learning models to analyze the heat exposure of 26 flint tools found in the 1970s at the Evron Quarry in northwest Israel. The results showed the tools had been heated to up to 600 °C.
==== Southeast Asia ====
At Trinil, Java, burned wood has been found in layers that carried H. erectus (Java Man) fossils dating from 830,000 to 500,000 BP. The burned wood has been claimed to indicate the use of fire by early hominids.
== Middle Paleolithic evidence ==
=== Africa ===
The Cave of Hearths in South Africa has burn deposits, which date from 700,000 to 200,000 BP, as do various other sites such as Montagu Cave (200,000 to 58,000 BP) and the Klasies River Mouth (130,000 to 120,000 BP).
Strong evidence comes from Kalambo Falls in Zambia, where several artifacts related to the use of fire by humans have been recovered, including charred logs, charcoal, carbonized grass stems and plants, and wooden implements that may have been hardened by fire. The site has been dated to about 180,000 BP through amino-acid racemization.
Fire was used for heat treatment of silcrete stones to increase their workability before they were knapped into tools by Stillbay culture in South Africa. These Stillbay sites date back from 164,000 to 72,000 years ago, with the heat treatment of stone beginning by about 164,000 years ago.
=== Asia ===
Evidence at Zhoukoudian cave in China suggests control of fire as early as 460,000 to 230,000 BP. Fire in Zhoukoudian is suggested by the presence of burned bones, burned chipped-stone artifacts, charcoal, ash, and hearths alongside H. erectus fossils in Layer 10, the earliest archaeological horizon at the site. This evidence comes from Locality 1, also known as the Peking Man site, where several bones were found to be uniformly black to grey. The bone extracts were determined to be characteristic of burned bone rather than manganese staining. These residues also showed IR spectra for oxides, and a turquoise bone was reproduced in the laboratory by heating some of the other bones found in Layer 10. The same effect might have been at the site due to natural heating, as the effect was produced on white, yellow, and black bones.
Layer 10 is ash with biologically produced silicon, aluminum, iron, and potassium, but wood ash remnants such as siliceous aggregates are missing. Among these are possible hearths "represented by finely laminated silt and clay interbedded with reddish-brown and yellow-brown fragments of organic matter, locally mixed with limestone fragments and dark brown finely laminated silt, clay, and organic matter." The site itself does not show that fires were made in Zhoukoudian, but the association of blackened bones with quartzite artifacts at least shows that humans did control fire at the time of the habitation of the Zhoukoudian cave.
==== Middle East ====
At the Amudian site of Qesem Cave, near the city of Kfar Qasim, Israel, evidence exists of the regular use of fire from before 382,000 BP to around 200,000 BP, near the end of the Lower Paleolithic. Large quantities of burned bone and moderately heated soil lumps were found, and the cut marks found on the bones suggest that butchering and prey-defleshing took place near fireplaces. In addition, hominins living in Qesem Cave managed to heat their flint to varying temperatures before knapping it into different tools.
==== Indian Subcontinent ====
The earliest evidence for controlled fire use by humans on the Indian subcontinent, dating to between 50,000 and 55,000 years ago, comes from the Main Belan archaeological site, located in the Belan River valley in Uttar Pradesh, India.
=== Europe ===
Multiple sites in Europe, such as Torralba and Ambrona, Spain, and Saint-Estève-Janson, France, have also shown evidence of the use of fire by later versions of H. erectus. The oldest such evidence has been found in England at the site of Beeches Pit, Suffolk; uranium series dating and thermoluminescence dating place the use of fire at 415,000 BP. At Vértesszőlős, Hungary, while no charcoal has been found, burned bones have been discovered dating from c. 350,000 years ago. At Torralba and Ambrona, Spain, objects such as Acheulean stone tools, remains of large mammals such as extinct elephants, charcoal, and wood were discovered. At Terra Amata in France, there is a fireplace with ashes dated between 380,000 BP and 230,000 BP. At Saint-Estève-Janson in France, there is evidence of five hearths and reddened earth in the Escale Cave; these hearths have been dated to 200,000 BP. Evidence for fire making dates to at least the Middle Paleolithic, with dozens of Neanderthal hand axes from France exhibiting use-wear traces suggesting these tools were struck against the mineral pyrite to produce sparks around 50,000 years ago.
== Impact on human evolution ==
=== Cultural innovation ===
==== Uses of fire by early humans ====
The discovery of fire provided various uses for early hominids. Its warmth kept them alive during low nighttime temperatures in colder environments, allowing geographic expansion from tropical and subtropical climates to temperate areas. Its blaze warded off predatory animals, especially in the dark.
Fire also played a major role in changing food habits. Cooking allowed a significant increase in meat consumption and calorie intake. It was soon discovered that meat could be dried and smoked by fire, preserving it for lean seasons. Fire was even used in manufacturing tools for hunting and butchering. Hominids also learned that starting bushfires to burn large areas could increase land fertility and clear terrain to make hunting easier. Evidence shows that early hominids were able to corral and trap prey animals using fire. Fire was used to clear out caves before living in them, helping to begin the use of shelter. The many uses of fire may have led to specialized social roles, such as the separation of cooking from hunting.
The control of fire enabled important changes in human behavior, health, energy expenditure, and geographic expansion. After the loss of body hair, hominids could move into much colder regions that would have previously been uninhabitable. Evidence of more complex management to change biomes can be found as far back as 200,000 to 100,000 years ago, at minimum.
==== Tool and weapon making ====
Fire allowed major innovations in tool and weapon manufacture. Evidence dating to roughly 164,000 years ago indicates that early humans in South Africa during the Middle Stone Age used fire to alter the mechanical properties of tool materials, applying heat treatment to a fine-grained rock called silcrete. The heat-treated rocks were then knapped into crescent-shaped blades or arrowheads for hunting and butchering prey. This may have been the first time that bow and arrow were used for hunting, with far-ranging impact.
==== Art and ceramics ====
Fire was used in the creation of art. Archaeologists have discovered several 1- to 10-inch Venus figurine statues in Europe dating to the Paleolithic, several carved from stone and ivory, others shaped from clay and then fired. These are some of the earliest examples of ceramics. Fire was also commonly used to create pottery. Although pottery was formerly thought to have begun with the Neolithic around 10,000 years ago, scientists in China discovered pottery fragments in the Xianrendong Cave that were about 20,000 years old. During the Neolithic Age and agricultural revolution about 10,000 years ago, pottery became far more common and widespread, often carved and painted with simple linear designs and geometric shapes.
==== Social development and nighttime activity ====
Fire was an important factor in expanding and developing the societies of early hominids. One impact fire might have had was social stratification: the power to make and wield fire may have conferred prestige and social position. Fire also lengthened daytime activities and allowed more nighttime activities. Evidence of large hearths indicates that the majority of nighttime was spent around the fire. The increased social interaction from gathering around the fire may have fostered the development of language.
Another effect of fire use on hominid societies was that it required larger groups to work together to maintain the fire: finding fuel, feeding the fire, and re-igniting it when necessary. These larger groups might have included older individuals, such as grandparents, who helped to care for children. Ultimately, fire significantly influenced the size and social interactions of early hominid communities.
Exposure to artificial light during later hours of the day changed humans' circadian rhythms, contributing to a longer waking day. The modern human's waking day is 16 hours, while many mammals are only awake for half as many hours. Additionally, humans are most awake during the early evening hours, while other primates' days begin at dawn and end at sundown. Many of these behavioral changes can be attributed to the control of fire and its impact on daylight extension.
=== The cooking hypothesis ===
The cooking hypothesis proposes that the ability to cook allowed for the brain size of hominids to increase over time. This idea was first presented by Friedrich Engels in the article "The Part Played by Labour in the Transition from Ape to Man" and later recapitulated in the book Catching Fire: How Cooking Made Us Human by Richard Wrangham and then in a book by Suzana Herculano-Houzel. Critics of the hypothesis argue that cooking with controlled fire was insufficient to start the increasing brain size trend.
The cooking hypothesis gains support by comparing the nutrients in raw food to the much more easily digested nutrients in cooked food, as in an examination of protein ingestion from raw vs. cooked egg. Scientists have found that among several primates, the restriction of feeding to raw foods during daylight hours limits the metabolic energy available. Genus Homo was able to break through the limit by cooking food to shorten their feeding times and be able to absorb more nutrients to accommodate the increasing need for energy. In addition, scientists argue that the Homo species was also able to obtain nutrients like docosahexaenoic acid from algae that were especially beneficial and critical for brain evolution. The detoxification of food by the cooking process enabled early humans to access these resources.
Besides the brain, other human organs also demand a high metabolism. During human evolution, the body-mass proportion of different organs changed to allow brain expansion.
==== Changes to diet ====
Before the advent of fire, the hominid diet was limited to mostly plant parts composed of simple sugars and carbohydrates such as seeds, flowers, and fleshy fruits. Parts of the plant, such as stems, mature leaves, enlarged roots, and tubers, would have been inaccessible as a food source due to the indigestibility of raw cellulose and starch. Cooking, however, made starchy and fibrous foods edible and greatly increased the diversity of other foods available to early humans. Toxin-containing foods, including seeds and similar carbohydrate sources, such as cyanogenic glycosides found in linseed and cassava, were incorporated into their diets as cooking rendered them nontoxic.
Cooking could also kill parasites, reduce the amount of energy required for chewing and digestion, and release more nutrients from plants and meat. Due to the difficulty of chewing raw meat and digesting tough proteins (e.g. collagen) and carbohydrates, the development of cooking served as an effective mechanism to process meat efficiently and allow for its consumption in larger quantities. With its high caloric density and content of important nutrients, meat thus became a staple in the diet of early humans. By increasing digestibility, cooking allowed hominids to maximize the energy gained from consuming foods. Studies show that caloric intake from cooking starches improves 12–35%, and 45–78% for protein. As a result of the increases in net energy gain from food consumption, survival and reproductive rates in hominids increased. Through lowering food toxicity and increasing nutritive yield, cooking allowed for an earlier weaning age, permitting females to have more children. In this way, too, it facilitated population growth.
It has been proposed that the use of fire for cooking caused environmental toxins to accumulate in the placenta, which led to a species-wide taboo on human placentophagy around the time of the mastery of fire. Placentophagy is common in other primates.
==== Biological changes ====
Before their use of fire, the hominid species had large premolars, which were used to chew harder foods, such as large seeds. In addition, due to the shape of the molar cusps, the diet is inferred to have been more leaf- or fruit-based. Probably in response to consuming cooked foods, the molar teeth of H. erectus gradually shrank, suggesting that their diet had changed from more challenging foods such as crisp root vegetables to softer cooked foods such as meat. Cooked foods further selected for the differentiation of their teeth and eventually led to a decreased jaw volume with a variety of smaller teeth in hominids. Today, a smaller jaw volume and teeth size of humans is seen in comparison to other primates.
Due to the increased digestibility of many cooked foods, less digestion was needed to procure the necessary nutrients. As a result, the gastrointestinal tract and organs in the digestive system decreased in size. This is in contrast to other primates, where a larger digestive tract is needed for the fermentation of long carbohydrate chains. Thus, humans evolved from the large colons and tracts that are seen in other primates to smaller ones.
According to Wrangham, fire control allowed hominids to sleep on the ground and in caves instead of trees and led to more time spent on the ground. This may have contributed to the evolution of bipedalism, as such an ability became increasingly necessary for human activity.
==== Criticism ====
Critics of the hypothesis argue that while a linear increase in brain volume of the genus Homo is seen over time, adding fire control and cooking does not add anything meaningful to the data. Species such as H. ergaster existed with large brain volumes during periods with little to no evidence of fire for cooking. Little variation exists in the brain sizes of H. erectus dated from periods of weak and strong evidence for cooking. An experiment involving mice fed raw versus cooked meat found that cooking meat did not increase the amount of calories taken up by mice, leading to the study's conclusion that the energetic gain is the same, if not greater, in raw-meat diets than in cooked-meat diets. Studies such as this and others have led to criticisms of the hypothesis that state that the increases in human brain size occurred well before the advent of cooking due to a shift away from the consumption of nuts and berries to the consumption of meat. Other anthropologists argue that the evidence suggests that cooking fires began in earnest only 250,000 BP, when ancient hearths, earth ovens, burned animal bones, and flint appear across Europe and the Middle East.
== See also ==
Hunting hypothesis
Savannah hypothesis
Theft of fire
== References ==
== External links ==
"How our pact with fire made us what we are" Archived 6 September 2015 at the Wayback Machine—Article by Stephen J Pyne
Human Timeline (Interactive) – National Museum of Natural History, Smithsonian (August 2016).
The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor. A CU typically uses a binary decoder to convert coded instructions into timing and control signals that direct the operation of the other units (memory, arithmetic logic unit and input and output devices, etc.).
Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction.
== Multicycle control units ==
The simplest computers use a multicycle microarchitecture. These were the earliest designs. They are still popular in the very smallest computers, such as the embedded systems that operate machinery.
In a computer, the control unit often steps through the instruction cycle successively. This consists of fetching the instruction, fetching the operands, decoding the instruction, executing the instruction, and then writing the results back to memory. When the next instruction is placed in the control unit, it changes the behavior of the control unit to complete the instruction correctly. So, the bits of the instruction directly control the control unit, which in turn controls the computer.
The control unit may include a binary counter to tell the control unit's logic what step it should do.
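The stepping behavior described above can be sketched in Python. This is a toy model, not any real CPU's design: the step names and the counter-based sequencing are illustrative assumptions.

```python
# Hypothetical sketch of a multicycle control unit: a binary step counter
# sequences the phases of the instruction cycle for each instruction.
STEPS = ["fetch_instruction", "fetch_operands", "decode", "execute", "write_back"]

class MulticycleControlUnit:
    def __init__(self):
        self.step = 0    # counter telling the control logic what step to do
        self.trace = []  # record of control actions, for illustration

    def clock_tick(self, instruction):
        """Perform one step of the current instruction per clock edge."""
        self.trace.append((instruction, STEPS[self.step]))
        self.step = (self.step + 1) % len(STEPS)
        return self.step == 0  # True when the instruction has completed

def run(program):
    cu = MulticycleControlUnit()
    for instr in program:
        done = False
        while not done:
            done = cu.clock_tick(instr)
    return cu.trace

trace = run(["ADD r1,r2", "STORE r1"])
# Each instruction takes len(STEPS) clock edges to complete.
```

The bits of the fetched instruction would, in real hardware, combine with the counter value to select the control signals for each step.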
Multicycle control units typically use both the rising and falling edges of their square-wave timing clock. They perform one step of their operation on each edge of the timing clock, so that a four-step operation completes in two clock cycles. This doubles the speed of the computer, given the same logic family.
Many computers have two different types of unexpected events. An interrupt occurs because some type of input or output needs software attention in order to operate correctly. An exception is caused by the computer's operation. One crucial difference is that the timing of an interrupt cannot be predicted. Another is that some exceptions (e.g. a memory-not-available exception) can be caused by an instruction that needs to be restarted.
Control units can be designed to handle interrupts in one of two typical ways. If a quick response is most important, a control unit is designed to abandon work to handle the interrupt. In this case, the work in process will be restarted after the last completed instruction. If the computer is to be very inexpensive, very simple, very reliable, or to get more work done, the control unit will finish the work in process before handling the interrupt. Finishing the work is inexpensive, because it needs no register to record the last finished instruction. It is simple and reliable because it has the fewest states. It also wastes the least amount of work.
Exceptions can be made to operate like interrupts in very simple computers. If virtual memory is required, then a memory-not-available exception must retry the failing instruction.
It is common for multicycle computers to use more cycles. Sometimes a conditional jump takes longer, because the program counter has to be reloaded. Sometimes multiplication or division instructions are done by a stepwise process, similar to binary long multiplication and division. Very small computers might do arithmetic one or a few bits at a time. Some other computers have very complex instructions that take many steps.
== Pipelined control units ==
Many medium-complexity computers pipeline instructions. This design is popular because of its economy and speed.
In a pipelined computer, instructions flow through the computer. This design has several stages. For example, it might have one stage for each step of the Von Neumann cycle. A pipelined computer usually has "pipeline registers" after each stage. These store the bits calculated by a stage so that the logic gates of the next stage can use the bits to do the next step.
It is common for even numbered stages to operate on one edge of the square-wave clock, while odd-numbered stages operate on the other edge. This speeds the computer by a factor of two compared to single-edge designs.
In a pipelined computer, the control unit arranges for the flow to start, continue, and stop as a program commands. The instruction data is usually passed in pipeline registers from one stage to the next, with a somewhat separated piece of control logic for each stage. The control unit also assures that the instruction in each stage does not harm the operation of instructions in other stages. For example, if two stages must use the same piece of data, the control logic assures that the uses are done in the correct sequence.
When operating efficiently, a pipelined computer will have an instruction in each stage. It is then working on all of those instructions at the same time. It can finish about one instruction for each cycle of its clock. When a program makes a decision and switches to a different sequence of instructions, the pipeline sometimes must discard the instructions in process and restart; this is called a pipeline "flush". When two instructions could interfere, sometimes the control unit must stop processing a later instruction until an earlier instruction completes. This is called a pipeline "bubble", because a part of the pipeline is not processing instructions. Pipeline bubbles can occur when two instructions operate on the same register.
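The overlap of instructions in stages, and the bubbles caused by register conflicts, can be sketched with a toy model. The four stage names, the hazard rule, and the cycle counts are simplifying assumptions, not a faithful pipeline implementation.

```python
# Illustrative pipeline model: instructions advance one stage per cycle;
# a bubble (None) is inserted when an instruction reads a register that
# an instruction still in the pipeline has not yet finished writing.
STAGES = ["fetch", "decode", "execute", "write_back"]

def simulate(program):
    """program: list of (dest_register, source_register) tuples."""
    pipeline = [None] * len(STAGES)   # pipeline registers between stages
    pending = list(program)
    cycles = 0
    while pending or any(s is not None for s in pipeline):
        # Data hazard: the next instruction's source is a destination of
        # some instruction still in flight.
        in_flight_dests = {s[0] for s in pipeline if s is not None}
        if pending and pending[0][1] not in in_flight_dests:
            incoming = pending.pop(0)
        else:
            incoming = None                      # stall: insert a bubble
        pipeline = [incoming] + pipeline[:-1]    # everything shifts forward
        cycles += 1
    return cycles

# Independent instructions overlap fully: roughly one finishes per cycle.
fast = simulate([("r1", "r9"), ("r2", "r9"), ("r3", "r9")])
# A dependent pair forces bubbles, so the same work takes more cycles.
slow = simulate([("r1", "r9"), ("r2", "r1")])
```

With three independent instructions the pipeline finishes in seven cycles (three issues plus four stages of drain), while the two dependent instructions take ten because of the inserted bubbles.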
Interrupts and unexpected exceptions also stall the pipeline. If a pipelined computer abandons work for an interrupt, more work is lost than in a multicycle computer. Predictable exceptions do not need to stall. For example, if an exception instruction is used to enter the operating system, it does not cause a stall.
For the same speed of electronic logic, a pipelined computer can execute more instructions per second than a multicycle computer. Also, even though the electronic logic has a fixed maximum speed, a pipelined computer can be made faster or slower by varying the number of stages in the pipeline. With more stages, each stage does less work, and so the stage has fewer delays from the logic gates.
A pipelined model of a computer often has fewer logic gates per instruction per second than multicycle and out-of-order computers. This is because the average stage is less complex than a multicycle computer. An out-of-order computer usually has large amounts of idle logic at any given instant. Similar calculations usually show that a pipelined computer uses less energy per instruction.
However, a pipelined computer is usually more complex and more costly than a comparable multicycle computer. It typically has more logic gates, registers and a more complex control unit. In a like way, it might use more total energy, while using less energy per instruction. Out-of-order CPUs can usually do more instructions per second because they can do several instructions at once.
== Preventing stalls ==
Control units use many methods to keep a pipeline full and avoid stalls. For example, even simple control units can assume that a backwards branch, to a lower-numbered, earlier instruction, is a loop, and will be repeated. So, a control unit with this design will always fill the pipeline with the backwards branch path. If a compiler can detect the most frequently taken direction of a branch, the compiler can just produce instructions so that the most frequently taken branch is the preferred direction of branch. In a like way, a control unit might get hints from the compiler: Some computers have instructions that can encode hints from the compiler about the direction of branch.
Some control units do branch prediction: A control unit keeps an electronic list of the recent branches, encoded by the address of the branch instruction. This list has a few bits for each branch to remember the direction that was taken most recently.
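A minimal sketch of such a list, assuming a 2-bit saturating counter per entry (a common scheme, though the text above does not fix the encoding; the table size and addresses are invented):

```python
# Toy branch-prediction table: a few bits per branch, indexed by the
# branch instruction's address. Counter values 0-1 predict not-taken,
# 2-3 predict taken; updates saturate at the ends.
TABLE_SIZE = 16  # illustrative; real tables are much larger

table = [1] * TABLE_SIZE  # every entry starts "weakly not-taken"

def predict(branch_address):
    return table[branch_address % TABLE_SIZE] >= 2

def update(branch_address, taken):
    i = branch_address % TABLE_SIZE
    if taken:
        table[i] = min(3, table[i] + 1)
    else:
        table[i] = max(0, table[i] - 1)

# A loop branch taken repeatedly becomes strongly predicted taken...
for _ in range(10):
    update(0x40, taken=True)
# ...and a single not-taken loop exit does not flip the prediction.
update(0x40, taken=False)
```

The two-bit hysteresis is why this beats a one-bit "remember last direction" scheme: a loop that exits once per pass mispredicts only once, not twice.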
Some control units can do speculative execution, in which a computer might have two or more pipelines, calculate both directions of a branch, and then discard the calculations of the unused direction.
Results from memory can become available at unpredictable times because very fast computers cache memory. That is, they copy limited amounts of memory data into very fast memory. The CPU must be designed to process at the very fast speed of the cache memory. Therefore, the CPU might stall when it must access main memory directly. In modern PCs, main memory is as much as three hundred times slower than cache.
To help this, out-of-order CPUs and control units were developed to process data as it becomes available. (See next section)
But what if all the calculations are complete, but the CPU is still stalled, waiting for main memory? Then, a control unit can switch to an alternative thread of execution whose data has been fetched while the thread was idle. A thread has its own program counter, a stream of instructions and a separate set of registers. Designers vary the number of threads depending on current memory technologies and the type of computer. Typical computers such as PCs and smart phones usually have control units with a few threads, just enough to keep busy with affordable memory systems. Database computers often have about twice as many threads, to keep their much larger memories busy. Graphic processing units (GPUs) usually have hundreds or thousands of threads, because they have hundreds or thousands of execution units doing repetitive graphic calculations.
When a control unit permits threads, the software also has to be designed to handle them. In general-purpose CPUs like PCs and smartphones, the threads are usually made to look very like normal time-sliced processes. At most, the operating system might need some awareness of them. In GPUs, the thread scheduling usually cannot be hidden from the application software, and is often controlled with a specialized subroutine library.
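The hardware-thread switching described above can be sketched with a toy scheduler. The thread structure, the fixed memory latency, and the pick-first-ready policy are illustrative assumptions, not any real control unit's design.

```python
# Illustrative model: each hardware thread has its own program counter;
# when the running thread stalls on a memory access, the control unit
# switches to another thread whose data is ready.

class Thread:
    def __init__(self, name, work):
        self.name = name
        self.pc = 0        # per-thread program counter
        self.work = work   # list of "compute" or "memory" operations

def run(threads, memory_latency=3):
    waiting = {}           # thread -> cycles until its memory data arrives
    schedule = []
    while any(t.pc < len(t.work) for t in threads):
        waiting = {t: c - 1 for t, c in waiting.items() if c > 1}
        # pick the first unfinished thread that is not stalled on memory
        ready = [t for t in threads if t.pc < len(t.work) and t not in waiting]
        if ready:
            t = ready[0]
            schedule.append(t.name)
            if t.work[t.pc] == "memory":
                waiting[t] = memory_latency  # hide the latency with others
            t.pc += 1
        else:
            schedule.append("idle")          # all threads stalled on memory
    return schedule

sched = run([Thread("A", ["memory", "compute"]),
             Thread("B", ["compute", "compute", "compute"])])
# Thread B runs while A waits for memory, keeping the CPU busy.
```

With more threads than the memory latency can stall at once, the "idle" case never occurs, which is exactly the sizing trade-off the paragraph above describes.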
== Out of order control units ==
A control unit can be designed to finish what it can. If several instructions can be completed at the same time, the control unit will arrange it. So, the fastest computers can process instructions in a sequence that can vary somewhat, depending on when the operands or instruction destinations become available. Most supercomputers and many PC CPUs use this method. The exact organization of this type of control unit depends on the slowest part of the computer.
When the execution of calculations is the slowest, instructions flow from memory into pieces of electronics called "issue units." An issue unit holds an instruction until both its operands and an execution unit are available. Then, the instruction and its operands are "issued" to an execution unit. The execution unit does the instruction. Then the resulting data is moved into a queue of data to be written back to memory or registers. If the computer has multiple execution units, it can usually do several instructions per clock cycle.
It is common to have specialized execution units. For example, a modestly priced computer might have only one floating-point execution unit, because floating point units are expensive. The same computer might have several integer units, because these are relatively inexpensive, and can do the bulk of instructions.
One kind of control unit for issuing uses an array of electronic logic, a "scoreboard" that detects when an instruction can be issued. The "height" of the array is the number of execution units, and the "length" and "width" are each the number of sources of operands. When all the items come together, the signals from the operands and execution unit will cross. The logic at this intersection detects that the instruction can work, so the instruction is "issued" to the free execution unit. An alternative style of issuing control unit implements the Tomasulo algorithm, which reorders a hardware queue of instructions. In some sense, both styles utilize a queue. The scoreboard is an alternative way to encode and reorder a queue of instructions, and some designers call it a queue table.
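The issue rule at the heart of a scoreboard, operands ready and a suitable execution unit free, can be sketched as follows. The register names, unit counts, and single-cycle execution are simplifying assumptions; this is not a faithful scoreboard implementation.

```python
# Simplified scoreboard sketch: an instruction issues only when all its
# source registers are ready and an execution unit of its type is free.
# Completion marks the destination register ready again.

def issue_order(instructions, units):
    """instructions: list of (unit_type, dest, sources);
    units: {unit_type: count}. Returns one list of issued dests per cycle."""
    ready = {f"r{i}" for i in range(16)}        # all registers start ready
    remaining = list(instructions)
    order = []
    while remaining:
        free = dict(units)
        issued = []
        for instr in list(remaining):
            op, dest, sources = instr
            # the scoreboard "crossing": operands ready AND a unit free
            if all(s in ready for s in sources) and free.get(op, 0) > 0:
                free[op] -= 1
                ready.discard(dest)             # dest busy until completion
                issued.append(instr)
                remaining.remove(instr)
        for op, dest, _ in issued:              # assume 1-cycle execution
            ready.add(dest)
        order.append([i[1] for i in issued])
    return order

# Two integer units and one floating-point unit: the three independent
# operations issue together; the dependent one waits for its source.
order = issue_order(
    [("int", "r1", ["r2"]), ("int", "r3", ["r4"]),
     ("int", "r5", ["r1"]), ("float", "r6", ["r7"])],
    {"int": 2, "float": 1})
```

The example also shows the specialization mentioned above: the single floating-point unit limits how many float operations can issue per cycle, while the cheaper integer units handle the bulk of the work.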
With some additional logic, a scoreboard can compactly combine execution reordering, register renaming and precise exceptions and interrupts. Further it can do this without the power-hungry, complex content-addressable memory used by the Tomasulo algorithm.
If the execution is slower than writing the results, the memory write-back queue always has free entries. But what if the memory writes slowly? Or what if the destination register will be used by an "earlier" instruction that has not yet issued? Then the write-back step of the instruction might need to be scheduled. This is sometimes called "retiring" an instruction. In this case, there must be scheduling logic on the back end of execution units. It schedules access to the registers or memory that will get the results.
Retiring logic can also be designed into an issuing scoreboard or a Tomasulo queue, by including memory or register access in the issuing logic.
Out of order controllers require special design features to handle interrupts. When there are several instructions in progress, it is not clear where in the instruction stream an interrupt occurs. For input and output interrupts, almost any solution works. However, when a computer has virtual memory, an interrupt occurs to indicate that a memory access failed. This memory access must be associated with an exact instruction and an exact processor state, so that the processor's state can be saved and restored by the interrupt. A usual solution preserves copies of registers until a memory access completes.
Also, out of order CPUs have even more problems with stalls from branching, because they can complete several instructions per clock cycle, and usually have many instructions in various stages of progress. So, these control units might use all of the solutions used by pipelined processors.
== Translating control units ==
Some computers translate each single instruction into a sequence of simpler instructions. The advantage is that an out of order computer can be simpler in the bulk of its logic, while handling complex multi-step instructions. x86 Intel CPUs since the Pentium Pro translate complex CISC x86 instructions to more RISC-like internal micro-operations.
In these, the "front" of the control unit manages the translation of instructions. Operands are not translated. The "back" of the CU is an out-of-order CPU that issues the micro-operations and operands to the execution units and data paths.
== Control units for low-powered computers ==
Many modern computers have controls that minimize power usage. In battery-powered computers, such as those in cell-phones, the advantage is longer battery life. In computers with utility power, the justification is to reduce the cost of power, cooling or noise.
Most modern computers use CMOS logic. CMOS wastes power in two common ways: By changing state, i.e. "active power", and by unintended leakage. The active power of a computer can be reduced by turning off control signals. Leakage current can be reduced by reducing the electrical pressure, the voltage, making the transistors with larger depletion regions or turning off the logic completely.
Active power is easier to reduce because data stored in the logic is not affected. The usual method reduces the CPU's clock rate. Most computer systems use this method. It is common for a CPU to idle during the transition to avoid side-effects from the changing clock.
Most computers also have a "halt" instruction. This was invented to stop non-interrupt code so that interrupt code has reliable timing. However, designers soon noticed that a halt instruction was also a good time to turn off a CPU's clock completely, reducing the CPU's active power to zero. The interrupt controller might continue to need a clock, but that usually uses much less power than the CPU.
These methods are relatively easy to design, and became so common that others were invented for commercial advantage. Many modern low-power CMOS CPUs stop and start specialized execution units and bus interfaces depending on the needed instruction. Some computers even arrange the CPU's microarchitecture to use transfer-triggered multiplexers so that each instruction only uses the exact pieces of logic needed.
One common method is to spread the load to many CPUs, and turn off unused CPUs as the load reduces. The operating system's task switching logic saves the CPUs' data to memory. In some cases, one of the CPUs can be simpler and smaller, literally with fewer logic gates. So, it has low leakage, and it is the last to be turned off, and the first to be turned on. Also it then is the only CPU that requires special low-power features. A similar method is used in most PCs, which usually have an auxiliary embedded CPU that manages the power system. However, in PCs, the software is usually in the BIOS, not the operating system.
Theoretically, computers at lower clock speeds could also reduce leakage by reducing the voltage of the power supply. This affects the reliability of the computer in many ways, so the engineering is expensive, and it is uncommon except in relatively expensive computers such as PCs or cellphones.
Some designs can use very low leakage transistors, but these usually add cost. The depletion barriers of the transistors can be made larger to have less leakage, but this makes the transistor larger and thus both slower and more expensive. Some vendors use this technique in selected portions of an IC by constructing low leakage logic from large transistors that some processes provide for analog circuits. Some processes place the transistors above the surface of the silicon, in "fin fets", but these processes have more steps, so are more expensive. Special transistor doping materials (e.g. hafnium) can also reduce leakage, but this adds steps to the processing, making it more expensive. Some semiconductors have a larger band-gap than silicon. However, these materials and processes are currently (2020) more expensive than silicon.
Managing leakage is more difficult, because before the logic can be turned-off, the data in it must be moved to some type of low-leakage storage.
Some CPUs make use of a special type of flip-flop (to store a bit) that couples a fast, high-leakage storage cell to a slow, large (expensive) low-leakage cell. These two cells have separated power supplies. When the CPU enters a power saving mode (e.g. because of a halt that waits for an interrupt), data is transferred to the low-leakage cells, and the others are turned off. When the CPU leaves a low-leakage mode (e.g. because of an interrupt), the process is reversed.
Older designs would copy the CPU state to memory, or even disk, sometimes with specialized software. Very simple embedded systems sometimes just restart.
== Integrating with the Computer ==
All modern CPUs have control logic to attach the CPU to the rest of the computer. In modern computers, this is usually a bus controller. When an instruction reads or writes memory, the control unit either controls the bus directly, or controls a bus controller. Many modern computers use the same bus interface for memory, input and output. This is called "memory-mapped I/O". To a programmer, the registers of the I/O devices appear as numbers at specific memory addresses. x86 PCs use an older method, a separate I/O bus accessed by I/O instructions.
A modern CPU also tends to include an interrupt controller. It handles interrupt signals from the system bus. The control unit is the part of the computer that responds to the interrupts.
There is often a cache controller to cache memory. The cache controller and the associated cache memory is often the largest physical part of a modern, higher-performance CPU. When the memory, bus or cache is shared with other CPUs, the control logic must communicate with them to assure that no computer ever gets out-of-date old data.
Many historic computers built some type of input and output directly into the control unit. For example, many historic computers had a front panel with switches and lights directly controlled by the control unit. These let a programmer directly enter a program and debug it. In later production computers, the most common use of a front panel was to enter a small bootstrap program to read the operating system from disk. This was tedious and error-prone, so front panels were replaced by bootstrap programs in read-only memory.
Most PDP-8 models had a data bus designed to let I/O devices borrow the control unit's memory read and write logic. This reduced the complexity and expense of high speed I/O controllers, e.g. for disk.
The Xerox Alto had a multitasking microprogrammable control unit that performed almost all I/O. This design provided most of the features of a modern PC with only a tiny fraction of the electronic logic. The dual-thread computer was run by the two lowest-priority microthreads. These performed calculations whenever I/O was not required. High priority microthreads provided (in decreasing priority) video, network, disk, a periodic timer, mouse, and keyboard. The microprogram did the complex logic of the I/O device, as well as the logic to integrate the device with the computer. For the actual hardware I/O, the microprogram read and wrote shift registers for most I/O, sometimes with resistor networks and transistors to shift output voltage levels (e.g. for video). To handle outside events, the microcontroller had microinterrupts to switch threads at the end of a thread's cycle, e.g. at the end of an instruction, or after a shift-register was accessed. The microprogram could be rewritten and reinstalled, which was very useful for a research computer.
== Functions of the control unit ==
A program of instructions in memory causes the CU to configure the CPU's data flows so that data is manipulated correctly between instructions. The result is a computer that can run a complete program with no human intervention to make hardware changes between instructions (as had to be done when using only punched cards for computations, before stored-program computers with CUs were invented).
== Hardwired control unit ==
Hardwired control units are implemented through use of combinational logic units, featuring a finite number of gates that can generate specific results based on the instructions that were used to invoke those responses. Hardwired control units are generally faster than the microprogrammed designs.
This design uses a fixed architecture—it requires changes in the wiring if the instruction set is modified or changed. It can be convenient for simple, fast computers.
A controller that uses this approach can operate at high speed; however, it has little flexibility. A complex instruction set can overwhelm a designer who uses ad hoc logic design.
The hardwired approach has become less popular as computers have evolved. Previously, control units for CPUs used ad hoc logic, and they were difficult to design.
== Microprogram control unit ==
The idea of microprogramming was introduced by Maurice Wilkes in 1951 as an intermediate level to execute computer program instructions. Microprograms were organized as a sequence of microinstructions and stored in special control memory. The algorithm for the microprogram control unit, unlike the hardwired control unit, is usually specified by a flowchart description. The main advantage of a microprogrammed control unit is the simplicity of its structure. The controller's outputs are organized as microinstructions, and the microprogram can be debugged and replaced much like software.
== Combination methods of design ==
A popular variation on microcode is to debug the microcode using a software simulator. Then, the microcode is a table of bits. This is a logical truth table, that translates a microcode address into the control unit outputs. This truth table can be fed to a computer program that produces optimized electronic logic. The resulting control unit is almost as easy to design as microprogramming, but it has the fast speed and low number of logic elements of a hard wired control unit. The practical result resembles a Mealy machine or Richards controller.
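The idea of microcode as a table of bits that maps a microcode address to control-unit outputs can be shown with a toy table. The signal names, addresses, and the "load" microroutine are invented for illustration.

```python
# Sketch of microcode as a truth table: each row holds control-signal
# bits plus a next-address field, indexed by the microprogram counter.
SIGNALS = ["mem_read", "alu_enable", "reg_write"]

# address -> (control bits, next address); a toy three-step microroutine
MICROCODE = {
    0: ((1, 0, 0), 1),   # fetch: read memory
    1: ((0, 1, 0), 2),   # compute the operand address in the ALU
    2: ((1, 0, 1), 0),   # read data, write it to a register, loop
}

def active_signals(address):
    bits, _ = MICROCODE[address]
    return [name for name, bit in zip(SIGNALS, bits) if bit]

def step(address):
    return MICROCODE[address][1]

# Walking the microroutine yields the control outputs at each micro-step.
addr = 0
outputs = []
for _ in range(3):
    outputs.append(active_signals(addr))
    addr = step(addr)
```

A logic-optimization tool fed this truth table would emit gates producing the same outputs directly, which is the combination method the paragraph describes.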
== See also ==
Processor design
Computer architecture
Richards controller
Controller (computing)
== References ==
Monolithic microwave integrated circuit, or MMIC (sometimes pronounced "mimic"), is a type of integrated circuit (IC) device that operates at microwave frequencies (300 MHz to 300 GHz). These devices typically perform functions such as microwave mixing, power amplification, low-noise amplification, and high-frequency switching. Inputs and outputs on MMIC devices are frequently matched to a characteristic impedance of 50 ohms. This makes them easier to use, as cascading of MMICs does not then require an external matching network. Additionally, most microwave test equipment is designed to operate in a 50-ohm environment.
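The benefit of a shared 50-ohm environment can be quantified with the standard reflection-coefficient formula, Γ = (Z_load − Z₀)/(Z_load + Z₀): when both sides of an interface are 50 ohms, Γ is zero and no power is reflected between cascaded stages. The numeric values below are illustrative examples.

```python
import math

# Reflection at an interface between a source impedance and a load
# impedance; zero reflection means all power transfers to the next stage.
def reflection_coefficient(z_load, z_source=50.0):
    return (z_load - z_source) / (z_load + z_source)

def return_loss_db(z_load, z_source=50.0):
    # Return loss in dB: larger is better (less reflected power).
    return -20.0 * math.log10(abs(reflection_coefficient(z_load, z_source)))

matched = reflection_coefficient(50.0)      # 0.0: a perfect 50-ohm match
mismatched = reflection_coefficient(75.0)   # 0.2: some power reflected
```

For the 75-ohm example, the return loss is about 14 dB, i.e. 4% of the incident power is reflected, which is why cascading MMICs without external matching networks works well when everything is designed to 50 ohms.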
MMICs are dimensionally small (from around 1 mm2 to 10 mm2) and can be mass-produced, which has allowed the proliferation of high-frequency devices such as cellular phones. MMICs were originally fabricated using gallium arsenide (GaAs), a III-V compound semiconductor. It has two fundamental advantages over silicon (Si), the traditional material for IC realisation: device (transistor) speed and a semi-insulating substrate. Both factors help with the design of high-frequency circuit functions. However, the speed of Si-based technologies has gradually increased as transistor feature sizes have reduced, and MMICs can now also be fabricated in Si technology. The primary advantage of Si technology is its lower fabrication cost compared with GaAs. Silicon wafer diameters are larger (typically 8" to 12" compared with 4" to 8" for GaAs) and the wafer costs are lower, contributing to a less expensive IC.
Originally, MMICs used metal-semiconductor field-effect transistors (MESFETs) as the active device. More recently, high-electron-mobility transistors (HEMTs), pseudomorphic HEMTs and heterojunction bipolar transistors have become common.
Other III-V technologies, such as indium phosphide (InP), have been shown to offer superior performance to GaAs in terms of gain, higher cutoff frequency, and low noise. However, they also tend to be more expensive due to smaller wafer sizes and increased material fragility.
Silicon germanium (SiGe) is a Si-based compound semiconductor technology offering higher-speed transistors than conventional Si devices but with similar cost advantages.
Gallium nitride (GaN) is also an option for MMICs. Because GaN transistors can operate at much higher temperatures and work at much higher voltages than GaAs transistors, they make ideal power amplifiers at microwave frequencies.
== See also ==
Hybrid integrated circuit
Transmission line
== References ==
Practical MMIC Design, Steve Marsh, published by Artech House ISBN 1-59693-036-5
RFIC and MMIC Design and Technology, editors I. D. Robertson and S. Lucyszyn, published by the IEE (London) ISBN 0-85296-786-1
A silicon controlled rectifier or semiconductor controlled rectifier (SCR) is a four-layer solid-state current-controlling device. The name "silicon controlled rectifier" is General Electric's trade name for a type of thyristor. The principle of four-layer p–n–p–n switching was developed by Moll, Tanenbaum, Goldey, and Holonyak of Bell Laboratories in 1956. The practical demonstration of silicon controlled switching and detailed theoretical behavior of a device in agreement with the experimental results was presented by Dr Ian M. Mackintosh of Bell Laboratories in January 1958. The SCR was developed by a team of power engineers led by Gordon Hall and commercialized by Frank W. "Bill" Gutzwiller in 1957.
Some sources define silicon-controlled rectifiers and thyristors as synonymous while other sources define silicon-controlled rectifiers as a proper subset of the set of thyristors; the latter being devices with at least four layers of alternating n- and p-type material. According to Bill Gutzwiller, the terms "SCR" and "controlled rectifier" were earlier, and "thyristor" was applied later, as usage of the device spread internationally.
SCRs are unidirectional devices (i.e. can conduct current only in one direction) as opposed to TRIACs, which are bidirectional (i.e. charge carriers can flow through them in either direction). SCRs can be triggered normally only by a positive current going into the gate as opposed to TRIACs, which can be triggered normally by either a positive or a negative current applied to its gate electrode.
== Modes of operation ==
There are three modes of operation for an SCR depending upon the biasing given to it:
Forward blocking mode (off state)
Forward conduction mode (on state)
Reverse blocking mode (off state)
=== Forward blocking mode ===
In this mode of operation, the anode (+, p-doped side) is given a positive voltage while the cathode (−, n-doped side) is given a negative voltage, keeping the gate at zero potential, i.e. disconnected. In this case junctions J1 and J3 are forward-biased, while J2 is reverse-biased, allowing only a small leakage current from the anode to the cathode. When the applied voltage reaches the breakover value for J2, J2 undergoes avalanche breakdown. At this breakover voltage J2 starts conducting, but below the breakover voltage J2 offers very high resistance to the current and the SCR is said to be in the off state.
=== Forward conduction mode ===
An SCR can be brought from blocking mode to conduction mode in two ways: Either by increasing the voltage between anode and cathode beyond the breakover voltage, or by applying a positive pulse at the gate. Once the SCR starts conducting, no more gate voltage is required to maintain it in the ON state. The minimum current necessary to maintain the SCR in the ON state on removal of the gate voltage is called the latching current.
There are two ways to turn it off:
Reduce the current through it below a minimum value called the holding current, or
With the gate turned off, short-circuit the anode and cathode momentarily with a push-button switch or transistor across the junction.
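The turn-on and turn-off conditions above can be sketched as a small state model. This is a hedged illustration, not a device model: the breakover voltage and the latching/holding current thresholds are invented placeholder values, and real SCR behaviour also depends on temperature, dv/dt, and device construction.

```python
# Minimal sketch of SCR latching behaviour.
# Threshold values are illustrative, not taken from any datasheet.

class SCR:
    def __init__(self, v_breakover=400.0, i_latching=0.08, i_holding=0.05):
        self.v_breakover = v_breakover   # forward breakover voltage (V), assumed
        self.i_latching = i_latching     # anode current needed to latch on (A), assumed
        self.i_holding = i_holding       # anode current below which it drops out (A), assumed
        self.on = False

    def step(self, v_ak, i_gate, i_anode):
        """Update the conduction state for one instant.

        v_ak    - anode-cathode voltage (V)
        i_gate  - gate current pulse (A)
        i_anode - anode current the external circuit would drive (A)
        """
        if not self.on:
            # Turn on by a gate pulse (with forward bias) or by exceeding breakover.
            if v_ak > 0 and (i_gate > 0 or v_ak >= self.v_breakover):
                if i_anode >= self.i_latching:
                    self.on = True
        else:
            # Once latched, the gate has no effect; only dropping the
            # anode current below the holding current turns the device off.
            if i_anode < self.i_holding:
                self.on = False
        return self.on

scr = SCR()
assert scr.step(v_ak=100, i_gate=0.0, i_anode=1.0) is False   # forward blocking
assert scr.step(v_ak=100, i_gate=0.01, i_anode=1.0) is True   # gate-triggered on
assert scr.step(v_ak=100, i_gate=0.0, i_anode=1.0) is True    # stays latched, gate removed
assert scr.step(v_ak=100, i_gate=0.0, i_anode=0.01) is False  # below holding current
```

The final two assertions mirror the two turn-off routes described above: the gate alone cannot turn the device off once latched, but reducing the anode current below the holding current does.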
=== Reverse blocking mode ===
When a negative voltage is applied to the anode and a positive voltage to the cathode, the SCR is in reverse blocking mode, making J1 and J3 reverse biased and J2 forward biased. The device behaves as two diodes connected in series. A small leakage current flows. This is the reverse blocking mode. If the reverse voltage is increased, then at critical breakdown level, called the reverse breakdown voltage (VBR), an avalanche occurs at J1 and J3 and the reverse current increases rapidly.
SCRs are available with reverse blocking capability, which adds to the forward voltage drop because of the need to have a long, low-doped P1 region. Usually, the reverse blocking voltage rating and forward blocking voltage rating are the same. The typical application for a reverse blocking SCR is in current-source inverters.
An SCR incapable of blocking reverse voltage is known as an asymmetrical SCR, abbreviated ASCR. It typically has a reverse breakdown rating in the tens of volts. ASCRs are used where either a reverse conducting diode is applied in parallel (for example, in voltage-source inverters) or where reverse voltage would never occur (for example, in switching power supplies or DC traction choppers).
Asymmetrical SCRs can be fabricated with a reverse conducting diode in the same package. These are known as RCTs, for reverse conducting thyristors.
== Thyristor turn-on methods ==
forward-voltage triggering
gate triggering
dv/dt triggering
thermal triggering
light triggering
Forward-voltage triggering occurs when the anode–cathode forward voltage is increased with the gate circuit opened. This is known as avalanche breakdown, during which junction J2 will break down. At sufficient voltages, the thyristor changes to its on state with low voltage drop and large forward current. In this case, J1 and J3 are already forward-biased.
In order for gate triggering to occur, the thyristor should be in the forward blocking state where the applied voltage is less than the breakdown voltage, otherwise forward-voltage triggering may occur. A single small positive voltage pulse can then be applied between the gate and the cathode. This supplies a single gate current pulse that turns the thyristor onto its on state. In practice, this is the most common method used to trigger a thyristor.
Thermal triggering occurs because the width of the depletion region decreases as the temperature increases. When the SCR is near VPO, a very small increase in temperature causes the depletion region of junction J2 to vanish, which triggers the device.
== Simple SCR circuit ==
A simple SCR circuit can be illustrated using an AC voltage source connected to an SCR with a resistive load. Without an applied current pulse to the gate of the SCR, the SCR remains in its forward blocking state. This makes the start of conduction of the SCR controllable. The delay angle α, which is the instant the gate current pulse is applied with respect to the instant of natural conduction (ωt = 0), controls the start of conduction. Once the SCR conducts, it does not turn off until the current through it, iS, becomes negative. iS stays at zero until another gate current pulse is applied and the SCR once again begins conducting.
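The effect of the delay angle α can be quantified for the simplest case. Assuming a half-wave circuit with a purely resistive load (an assumption beyond the text above), the average load voltage is Vavg = (Vm / 2π)·(1 + cos α); the sketch below checks that closed form against a direct numerical integration of the conducting part of the cycle.

```python
import math

def avg_output_voltage(v_peak, alpha):
    """Average load voltage of a half-wave phase-controlled rectifier with a
    purely resistive load: Vavg = (Vm / 2*pi) * (1 + cos(alpha)).
    alpha is the delay (firing) angle in radians."""
    return v_peak / (2 * math.pi) * (1 + math.cos(alpha))

def avg_output_numeric(v_peak, alpha, n=100_000):
    """Numerical check: integrate v = Vm*sin(wt) over the conduction
    interval [alpha, pi] and average over the full cycle 2*pi."""
    step = (math.pi - alpha) / n
    total = sum(v_peak * math.sin(alpha + (k + 0.5) * step) * step
                for k in range(n))
    return total / (2 * math.pi)

vm = 325.0  # roughly the peak of a 230 V RMS supply (illustrative value)
for alpha_deg in (0, 45, 90, 135):
    a = math.radians(alpha_deg)
    assert abs(avg_output_voltage(vm, a) - avg_output_numeric(vm, a)) < 1e-3
```

At α = 0 the SCR behaves as an ordinary half-wave rectifier (Vavg = Vm/π); as α approaches π the average output falls to zero, which is the basis of phase-control dimming and motor-speed regulation mentioned in the applications below.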
== Applications ==
SCRs are mainly used in devices where the control of high power, possibly coupled with high voltage, is demanded. Their operation makes them suitable for use in medium- to high-voltage AC power control applications, such as lamp dimming, power regulators and motor control.
SCRs and similar devices are used for rectification of high-power AC in high-voltage direct current power transmission. They are also used in the control of welding machines, mainly gas tungsten arc welding and similar processes. It is used as an electronic switch in various devices. Early solid-state pinball machines made use of these to control lights, solenoids, and other functions electronically, instead of mechanically, hence the name solid-state.
Other applications include power switching circuits, controlled rectifiers, speed control of DC shunt motors, SCR crowbars, computer logic circuits, timing circuits, and inverters.
== Comparison with SCS ==
A silicon-controlled switch (SCS) behaves nearly the same way as an SCR, but with a few differences. Unlike an SCR, an SCS switches off when a positive voltage/input current is applied to a second, anode gate lead; it can also be triggered into conduction when a negative voltage/output current is applied to that same lead.
SCSs are useful in practically all circuits that need a switch that turns on/off through two distinct control pulses. This includes power-switching circuits, logic circuits, lamp drivers, and counters.
== Compared to TRIACs ==
A TRIAC resembles an SCR in that both act as electrically controlled switches. Unlike an SCR, a TRIAC can pass current in either direction. Thus, TRIACs are particularly useful for AC applications. TRIACs have three leads: a gate lead and two conducting leads, referred to as MT1 and MT2. If no current/voltage is applied to the gate lead, the TRIAC switches off. On the other hand, if the trigger voltage is applied to the gate lead, the TRIAC switches on.
TRIACs are suitable for light-dimming circuits, phase-control circuits, AC power-switching circuits, AC motor control circuits, etc.
== See also ==
Bipolar junction transistor (BJT)
Crowbar (circuit)
DIAC
Gate turn-off thyristor
High-voltage direct current
Insulated-gate bipolar transistor
Integrated gate-commutated thyristor
Snubber
Voltage regulator
== References ==
== Further reading ==
ON Semiconductor (November 2006). Thyristor Theory and Design Considerations (PDF) (rev.1, HBD855/D ed.). p. 240.
G. K. Mithal. Industrial and Power Electronics.
K. B. Khanchandani. Power Electronics.
== External links ==
SCR at AllAboutCircuits
SCR Circuit Design | Wikipedia/Silicon_controlled_rectifier |
The Electronic System Design Alliance (ESD Alliance) is the international association of companies that provide tools and services for electronic design automation. Until 2016 it was known as the Electronic Design Automation Consortium (EDA Consortium, EDAC). In 2018, the ESD Alliance became a SEMI Technology Community.
It defines itself as "a forum to address technical, marketing, economic and legislative issues affecting the entire industry. It acts as the central voice to communicate and promote the value of the semiconductor design industry as a vital component of the global electronics industry".
The 2016 name change reflects the expansion of its charter to address the changes in the industry towards a more system-oriented approach, embracing both integrated circuits design (its past focus) and electronic systems design.
The organization, then known as EDAC, was established in 1987 and incorporated in 1992.
In 1994 the organization established the Phil Kaufman Award to recognize individuals for their contributions to electronic design automation.
== References == | Wikipedia/Electronic_Design_Automation_Consortium |
Tomasulo's algorithm is a computer architecture hardware algorithm for dynamic scheduling of instructions that allows out-of-order execution and enables more efficient use of multiple execution units. It was developed by Robert Tomasulo at IBM in 1967 and was first implemented in the IBM System/360 Model 91’s floating point unit.
The major innovations of Tomasulo’s algorithm include register renaming in hardware, reservation stations for all execution units, and a common data bus (CDB) on which computed values broadcast to all reservation stations that may need them. These developments allow for improved parallel execution of instructions that would otherwise stall under the use of scoreboarding or other earlier algorithms.
Robert Tomasulo received the Eckert–Mauchly Award in 1997 for his work on the algorithm.
== Implementation concepts ==
The following are the concepts necessary to the implementation of Tomasulo's algorithm:
=== Common data bus ===
The Common Data Bus (CDB) connects reservation stations directly to functional units. According to Tomasulo it "preserves precedence while encouraging concurrency".: 33 This has two important effects:
Functional units can access the result of any operation without involving a floating-point register, allowing multiple units waiting on a result to proceed without waiting to resolve contention for access to register file read ports.
Hazard detection and execution control are distributed. The reservation stations control when an instruction can execute, rather than a single dedicated hazard unit.
=== Instruction order ===
Instructions are issued sequentially so that the effects of a sequence of instructions, such as exceptions raised by these instructions, occur in the same order as they would on an in-order processor, regardless of the fact that they are being executed out-of-order (i.e. non-sequentially).
=== Register renaming ===
Tomasulo's algorithm uses register renaming to correctly perform out-of-order execution. All general-purpose and reservation station registers hold either a real value or a placeholder value. If a real value is unavailable to a destination register during the issue stage, a placeholder value is initially used. The placeholder value is a tag indicating which reservation station will produce the real value. When the unit finishes and broadcasts the result on the CDB, the placeholder will be replaced with the real value.
Each functional unit has a single reservation station. Reservation stations hold information needed to execute a single instruction, including the operation and the operands. The functional unit begins processing when it is free and when all source operands needed for an instruction are real.
=== Exceptions ===
Practically speaking, there may be exceptions for which not enough status information about an exception is available, in which case the processor may raise a special exception, called an imprecise exception. Imprecise exceptions cannot occur in in-order implementations, as processor state is changed only in program order (see Classic RISC pipeline § Exceptions).
Programs that experience precise exceptions, where the specific instruction that took the exception can be determined, can restart or re-execute at the point of the exception. However, those that experience imprecise exceptions generally cannot restart or re-execute, as the system cannot determine the specific instruction that took the exception.
== Instruction lifecycle ==
The three stages listed below are the stages through which each instruction passes from the time it is issued to the time its execution is complete.
=== Legend ===
RS - Reservation Station; contains information about the reservation stations.
RegisterStat - Register Status; contains information about the registers.
regs[x] - Value of register x
Mem[A] - Value of memory at address A
rd - destination register number
rs, rt - source register numbers
imm - sign extended immediate field
r - reservation station or buffer that the instruction is assigned to
==== Reservation Station Fields ====
Op - represents the operation being performed on operands
Qj, Qk - the reservation station that will produce the relevant source operand (0 indicates the value is in Vj, Vk)
Vj, Vk - the value of the source operands
A - used to hold the memory address information for a load or store
Busy - 1 if occupied, 0 if not occupied
==== Register Status Fields ====
Qi - the reservation station whose result should be stored in this register (if blank or 0, no values are destined for this register)
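The legend above maps naturally onto plain records. The following is a minimal sketch: field names follow the legend, while the concrete types and defaults are assumptions for illustration.

```python
from dataclasses import dataclass

# Hedged sketch of the reservation-station and register-status records
# described in the legend above; types and defaults are illustrative.

@dataclass
class ReservationStation:
    busy: bool = False   # 1 if occupied, 0 if not occupied
    op: str = ""         # operation being performed on the operands
    vj: int = 0          # value of the first source operand
    vk: int = 0          # value of the second source operand
    qj: int = 0          # RS that will produce Vj (0 = value already in Vj)
    qk: int = 0          # RS that will produce Vk (0 = value already in Vk)
    a: int = 0           # memory address information for a load or store

@dataclass
class RegisterStat:
    qi: int = 0          # RS whose result is destined for this register
                         # (0 = no pending write; register value is current)
```

Encoding "no producer" as the tag 0 is what lets the issue logic below test a single integer field to decide whether an operand value is ready or must be awaited.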
=== Stage 1: issue ===
In the issue stage, instructions are issued for execution if all operands and reservation stations are ready or else they are stalled. Registers are renamed in this step, eliminating WAR and WAW hazards.
Retrieve the next instruction from the head of the instruction queue. If the instruction operands are currently in the registers, then
If a matching functional unit is available, issue the instruction.
Else, as there is no available functional unit, stall the instruction until a station or buffer is free.
Otherwise, the operands are not in the registers; issue the instruction with placeholder (tag) values in their place, so that the reservation station keeps track of the functional units that will produce the operands.
=== Stage 2: execute ===
In the execute stage, the instruction operations are carried out. Instructions are delayed in this step until all of their operands are available, eliminating RAW hazards. Program correctness is maintained through effective address calculation to prevent hazards through memory.
If one or more of the operands is not yet available then: wait for operand to become available on the CDB.
When all operands are available, then: if the instruction is a load or store
Compute the effective address when the base register is available, and place it in the load/store buffer
If the instruction is a load then: execute as soon as the memory unit is available
Else, if the instruction is a store then: wait for the value to be stored before sending it to the memory unit
Else, the instruction is an arithmetic logic unit (ALU) operation then: execute the instruction at the corresponding functional unit
=== Stage 3: write result ===
In the write result stage, ALU operation results are written back to registers and store operations are written back to memory.
If the instruction was an ALU operation
If the result is available, then: write it on the CDB and from there into the registers and any reservation stations waiting for this result
Else, if the instruction was a store then: write the data to memory during this step
== Algorithm improvements ==
The concepts of reservation stations, register renaming, and the common data bus in Tomasulo's algorithm present significant advancements in the design of high-performance computers.
Reservation stations take on the responsibility of waiting for operands in the presence of data dependencies and other inconsistencies such as varying storage access time and circuit speeds, thus freeing up the functional units. This improvement overcomes long floating point delays and memory accesses. In particular the algorithm is more tolerant of cache misses. Additionally, programmers are freed from implementing optimized code. This is a result of the common data bus and reservation station working together to preserve dependencies as well as encouraging concurrency.: 33
By tracking operands for instructions in the reservation stations and renaming registers in hardware, the algorithm minimizes read-after-write (RAW) and eliminates write-after-write (WAW) and write-after-read (WAR) computer architecture hazards. This improves performance by reducing wasted time that would otherwise be required for stalls.: 33
An equally important improvement in the algorithm is that the design is not limited to a specific pipeline structure. This improvement allows the algorithm to be more widely adopted by multiple-issue processors. Additionally, the algorithm is easily extended to enable branch speculation. : 182
== Applications and legacy ==
Tomasulo's algorithm was implemented in the System/360 Model 91 architecture. Outside of IBM, it went unused for several years. However, its use increased dramatically during the 1990s for three reasons:
Once caches became commonplace, the algorithm's ability to maintain concurrency during unpredictable load times caused by cache misses became valuable in processors.
Dynamic scheduling and branch speculation from the algorithm enables improved performance as processors issued more and more instructions.
Proliferation of mass-market software meant that programmers would not want to compile for a specific pipeline structure. The algorithm can function with any pipeline architecture and thus software requires few architecture-specific modifications. : 183
Many modern processors implement dynamic scheduling schemes that are variants of Tomasulo's original algorithm, including popular Intel x86-64 chips.
== See also ==
Re-order buffer (ROB)
Instruction-level parallelism (ILP)
== References ==
== Further reading ==
Savard, John J. G. (2018) [2014]. "Pipelined and Out-of-Order Execution". quadibloc. Archived from the original on 2018-07-03. Retrieved 2018-07-16.
== External links ==
Dynamic Scheduling - Tomasulo's Algorithm at the Wayback Machine (archived December 25, 2017)
HASE Java applet simulation of the Tomasulo's algorithm | Wikipedia/Tomasulo's_algorithm |
Forensic science, often confused with criminalistics, is the application of science principles and methods to support legal decision-making in matters of criminal and civil law.
During criminal investigation in particular, it is governed by the legal standards of admissible evidence and criminal procedure. It is a broad field utilizing numerous practices such as the analysis of DNA, fingerprints, bloodstain patterns, firearms, ballistics, toxicology, microscopy, and fire debris analysis.
Forensic scientists collect, preserve, and analyze evidence during the course of an investigation. While some forensic scientists travel to the scene of the crime to collect the evidence themselves, others occupy a laboratory role, performing analysis on objects brought to them by other individuals. Others are involved in analysis of financial, banking, or other numerical data for use in financial crime investigation, and can be employed as consultants from private firms, academia, or as government employees.
In addition to their laboratory role, forensic scientists testify as expert witnesses in both criminal and civil cases and can work for either the prosecution or the defense. While any field could technically be forensic, certain sections have developed over time to encompass the majority of forensically related cases.
== Etymology ==
The term forensic stems from the Latin word, forēnsis (3rd declension, adjective), meaning "of a forum, place of assembly". The history of the term originates in Roman times, when a criminal charge meant presenting the case before a group of public individuals in the forum. Both the person accused of the crime and the accuser would give speeches based on their sides of the story. The case would be decided in favor of the individual with the best argument and delivery. This origin is the source of the two modern usages of the word forensic—as a form of legal evidence; and as a category of public presentation.
In modern use, the term forensics is often used in place of "forensic science."
The word "science" is derived from the Latin word for 'knowledge' and is today closely tied to the scientific method, a systematic way of acquiring knowledge. Taken together, forensic science means the use of scientific methods and processes for crime solving.
== History ==
=== Origins of forensic science and early methods ===
The ancient world lacked standardized forensic practices, which enabled criminals to escape punishment. Criminal investigations and trials relied heavily on forced confessions and witness testimony. However, ancient sources do contain several accounts of techniques that foreshadow concepts in forensic science developed centuries later.
The first written account of using medicine and entomology to solve criminal cases is attributed to the book of Xi Yuan Lu (translated as Washing Away of Wrongs), written in China in 1248 by Song Ci (宋慈, 1186–1249), a director of justice, jail and supervision, during the Song dynasty.
Song Ci introduced regulations concerning autopsy reports to court, how to protect the evidence in the examining process, and explained why forensic workers must demonstrate impartiality to the public. He devised methods for making antiseptic and for promoting the reappearance of hidden injuries to dead bodies and bones (using sunlight and vinegar under a red-oil umbrella); for calculating the time of death (allowing for weather and insect activity); described how to wash and examine the dead body to ascertain the reason for death. At that time the book had described methods for distinguishing between suicide and faked suicide. He wrote the book on forensics stating that all wounds or dead bodies should be examined, not avoided. The book became the first form of literature to help determine the cause of death.
In one of Song Ci's accounts (Washing Away of Wrongs), the case of a person murdered with a sickle was solved by an investigator who instructed each suspect to bring his sickle to one location. (He realized it was a sickle by testing various blades on an animal carcass and comparing the wounds.) Flies, attracted by the smell of blood, eventually gathered on a single sickle. In light of this, the owner of that sickle confessed to the murder. The book also described how to distinguish between a drowning (water in the lungs) and strangulation (broken neck cartilage), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident.
Methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as a precursor to the Polygraph test. In ancient India, some suspects were made to fill their mouths with dried rice and spit it back out. Similarly, in ancient China, those accused of a crime would have rice powder placed in their mouths. In ancient middle-eastern cultures, the accused were made to lick hot metal rods briefly. It is thought that these tests had some validity since a guilty person would produce less saliva and thus have a drier mouth; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned due to lack of shielding from saliva.
== Education and training ==
At first glance, forensic intelligence may appear to be a nascent facet of forensic science facilitated by advances in information technologies such as computers, databases, and data-flow management software. A closer examination, however, reveals that forensic intelligence represents a genuine and growing inclination among forensic practitioners to participate actively in investigative and policing strategies. In doing so, it builds on practices already described in the scientific literature, advocating a shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system toward a view of forensic science as the discipline that studies the informative potential of traces, the remnants of criminal activity. Embracing this shift poses a significant challenge for education, since it requires learners to adopt the concepts and methodologies of forensic intelligence.
Recent calls advocating for the integration of forensic scientists into the criminal justice system, as well as policing and intelligence missions, underscore the necessity of establishing educational and training initiatives in forensic intelligence. It has been argued that a discernible gap exists between the perceived and actual comprehension of forensic intelligence among law enforcement and forensic science managers, and that this asymmetry can be rectified only through educational interventions.
The primary challenge in forensic intelligence education and training is the design of programs that heighten awareness, particularly among managers, to mitigate the risk of suboptimal decisions in information processing. Two recent European courses serve as exemplars of such educational endeavors, offering lessons learned and directions for the future.
The overarching conclusion is that the heightened focus on forensic intelligence has the potential to rejuvenate a proactive approach to forensic science, enhance quantifiable efficiency, and foster greater involvement in investigative and managerial decision-making. A novel educational challenge is articulated for forensic science university programs worldwide: a shift in emphasis from a fragmented criminal trace analysis to a more comprehensive security problem-solving approach.
=== Development of forensic science ===
In 16th-century Europe, medical practitioners in army and university settings began to gather information on the cause and manner of death. Ambroise Paré, a French army surgeon, systematically studied the effects of violent death on internal organs. Two Italian surgeons, Fortunato Fidelis and Paolo Zacchia, laid the foundation of modern pathology by studying changes that occurred in the structure of the body as the result of disease. In the late 18th century, writings on these topics began to appear. These included A Treatise on Forensic Medicine and Public Health by the French physician François-Emmanuel Fodéré and The Complete System of Police Medicine by the German medical expert Johann Peter Frank.
As the rational values of the Enlightenment era increasingly permeated society in the 18th century, criminal investigation became a more evidence-based, rational procedure − the use of torture to force confessions was curtailed, and belief in witchcraft and other powers of the occult largely ceased to influence the court's decisions. Two examples of English forensic science in individual legal proceedings demonstrate the increasing use of logic and procedure in criminal investigations at the time. In 1784, in Lancaster, John Toms was tried and convicted for murdering Edward Culshaw with a pistol. When the dead body of Culshaw was examined, a pistol wad (crushed paper used to secure powder and balls in the muzzle) found in his head wound matched perfectly with a torn newspaper found in Toms's pocket, leading to the conviction.
In Warwick 1816, a farm laborer was tried and convicted of the murder of a young maidservant. She had been drowned in a shallow pool and bore the marks of violent assault. The police found footprints and an impression from corduroy cloth with a sewn patch in the damp earth near the pool. There were also scattered grains of wheat and chaff. The breeches of a farm labourer who had been threshing wheat nearby were examined and corresponded exactly to the impression in the earth near the pool.
An article appearing in Scientific American in 1885 describes the use of microscopy to distinguish between the blood of two persons in a criminal case in Chicago.
=== Chromatography ===
Chromatography is a common technique in forensic science for separating the components of a mixture via a mobile phase. It is an essential tool that helps analysts identify and compare trace amounts of samples, including ignitable liquids, drugs, and biological material. Many laboratories utilize gas chromatography/mass spectrometry (GC/MS) to examine these kinds of samples; this analysis provides rapid and reliable data to identify samples in question.
=== Toxicology ===
A method for detecting arsenious oxide, simple arsenic, in corpses was devised in 1773 by the Swedish chemist Carl Wilhelm Scheele. His work was expanded upon in 1806 by the German chemist Valentin Ross, who learned to detect the poison in the walls of a victim's stomach. Toxicology, a subfield of forensic chemistry, focuses on detecting and identifying drugs, poisons, and other toxic substances in biological samples. Forensic toxicologists work on cases involving drug overdoses, poisoning, and substance abuse. Their work is critical in determining whether harmful substances played a role in a person's death or impairment.
James Marsh was the first to apply this new science to the art of forensics. He was called by the prosecution in a murder trial to give evidence as a chemist in 1832. The defendant, John Bodle, was accused of poisoning his grandfather with arsenic-laced coffee. Marsh performed the standard test by mixing a suspected sample with hydrogen sulfide and hydrochloric acid. While he was able to detect arsenic as yellow arsenic trisulfide, when it was shown to the jury it had deteriorated, allowing the suspect to be acquitted due to reasonable doubt.
Annoyed by that, Marsh developed a much better test. He combined a sample containing arsenic with sulfuric acid and arsenic-free zinc, resulting in arsine gas. The gas was ignited, and it decomposed to pure metallic arsenic, which, when passed to a cold surface, would appear as a silvery-black deposit. So sensitive was the test, known formally as the Marsh test, that it could detect as little as one-fiftieth of a milligram of arsenic. He first described this test in The Edinburgh Philosophical Journal in 1836.
=== Ballistics and firearms ===
Ballistics is "the science of the motion of projectiles in flight". In forensic science, analysts examine the patterns left on bullets and cartridge casings after being ejected from a weapon. When fired, a bullet is left with indentations and markings that are unique to the barrel and firing pin of the firearm that ejected the bullet. This examination can help scientists identify possible makes and models of weapons connected to a crime.
Henry Goddard at Scotland Yard pioneered the use of bullet comparison in 1835. He noticed a flaw in the bullet that killed the victim and was able to trace this back to the mold that was used in the manufacturing process.
=== Anthropometry ===
The French police officer Alphonse Bertillon was the first to apply the anthropological technique of anthropometry to law enforcement, thereby creating an identification system based on physical measurements. Before that time, criminals could be identified only by name or photograph. Dissatisfied with the ad hoc methods used to identify captured criminals in France in the 1870s, he began his work on developing a reliable system of anthropometrics for human classification.
Bertillon created many other forensics techniques, including forensic document examination, the use of galvanoplastic compounds to preserve footprints, ballistics, and the dynamometer, used to determine the degree of force used in breaking and entering. Although his central methods were soon to be supplanted by fingerprinting, "his other contributions like the mug shot and the systematization of crime-scene photography remain in place to this day."
=== Fingerprints ===
Sir William Herschel was one of the first to advocate the use of fingerprinting in the identification of criminal suspects. While working for the Indian Civil Service, he began to use thumbprints on documents as a security measure to prevent the then-rampant repudiation of signatures in 1858.
In 1877 at Hooghly (near Kolkata), Herschel instituted the use of fingerprints on contracts and deeds, and he registered government pensioners' fingerprints to prevent the collection of money by relatives after a pensioner's death.
In 1880, Henry Faulds, a Scottish surgeon in a Tokyo hospital, published his first paper on the subject in the scientific journal Nature, discussing the usefulness of fingerprints for identification and proposing a method to record them with printing ink. He established their first classification and was also the first to identify fingerprints left on a vial. Returning to the UK in 1886, he offered the concept to the Metropolitan Police in London, but it was dismissed at that time.
Faulds wrote to Charles Darwin with a description of his method, but, too old and ill to work on it, Darwin gave the information to his cousin, Francis Galton, who was interested in anthropology. Having been thus inspired to study fingerprints for ten years, Galton published a detailed statistical model of fingerprint analysis and identification and encouraged its use in forensic science in his book Finger Prints. He had calculated that the chance of a "false positive" (two different individuals having the same fingerprints) was about 1 in 64 billion.
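Galton's "1 in 64 billion" figure is commonly reconstructed as the product of three independent probabilities in his model: a 1/2 chance of matching each of 24 ridge regions, a 1/16 chance for the overall pattern type, and a 1/256 chance for the ridge count. Under that (hedged) reading, the arithmetic works out to exactly one in 2^36:

```python
# Hedged arithmetic check of Galton's estimate; the three-factor
# decomposition below is a common reconstruction, not a quotation.
p_regions = (1 / 2) ** 24   # 24 ridge regions, each matched with probability 1/2
p_type = 1 / 16             # chance of the correct overall pattern type
p_ridge_count = 1 / 256     # chance of the correct surrounding ridge count
p_match = p_regions * p_type * p_ridge_count

one_in = 1 / p_match
assert one_in == 2 ** 36    # 68,719,476,736: on the order of "1 in 64 billion"
```

Because every factor is a power of two, the product is exact in floating point, and 2^36 ≈ 6.9 × 10^10 is the figure Galton rounded to "about 1 in 64 billion".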
Juan Vucetich, an Argentine chief police officer, created the first method of recording the fingerprints of individuals on file. In 1892, after studying Galton's pattern types, Vucetich set up the world's first fingerprint bureau. In that same year, Francisca Rojas of Necochea was found in a house with neck injuries whilst her two sons were found dead with their throats cut. Rojas accused a neighbour, but despite brutal interrogation, this neighbour would not confess to the crimes. Inspector Alvarez, a colleague of Vucetich, went to the scene and found a bloody thumb mark on a door. When it was compared with Rojas' prints, it was found to be identical with her right thumb. She then confessed to the murder of her sons.
A Fingerprint Bureau was established in Calcutta (Kolkata), India, in 1897, after the Council of the Governor General approved a committee report that fingerprints should be used for the classification of criminal records. Working in the Calcutta Anthropometric Bureau, before it became the Fingerprint Bureau, were Azizul Haque and Hem Chandra Bose. Haque and Bose were Indian fingerprint experts who have been credited with the primary development of a fingerprint classification system eventually named after their supervisor, Sir Edward Richard Henry. The Henry Classification System, co-devised by Haque and Bose, was accepted in England and Wales when the first United Kingdom Fingerprint Bureau was founded in Scotland Yard, the Metropolitan Police headquarters, London, in 1901. Sir Edward Richard Henry subsequently achieved improvements in dactyloscopy.
In the United States, Henry P. DeForrest used fingerprinting in the New York Civil Service in 1902, and by December 1905, New York City Police Department Deputy Commissioner Joseph A. Faurot, an expert in the Bertillon system and a fingerprint advocate at Police Headquarters, introduced the fingerprinting of criminals to the United States.
=== Uhlenhuth test ===
The Uhlenhuth test, or the antigen–antibody precipitin test for species, was invented by Paul Uhlenhuth in 1901 and could distinguish human blood from animal blood, based on the discovery that the blood of different species had one or more characteristic proteins. The test represented a major breakthrough and came to have tremendous importance in forensic science. The test was further refined for forensic use by the Swiss chemist Maurice Müller in the 1960s.
=== DNA ===
Forensic DNA analysis was first used in 1984. It was developed by Sir Alec Jeffreys, who realized that variation in the genetic sequence could be used to identify individuals and to tell them apart. Jeffreys first applied DNA profiling in a double murder mystery in the small English town of Narborough, Leicestershire, in 1985. A 15-year-old schoolgirl by the name of Lynda Mann was raped and murdered in Carlton Hayes psychiatric hospital. The police did not find a suspect but were able to obtain a semen sample.
In 1986, Dawn Ashworth, 15 years old, was also raped and strangled in the nearby village of Enderby. Forensic evidence showed that both killers had the same blood type. Richard Buckland became the suspect because he worked at Carlton Hayes psychiatric hospital, had been spotted near Dawn Ashworth's murder scene and knew unreleased details about the body. He later confessed to Dawn's murder but not Lynda's. Jeffreys was brought into the case to analyze the semen samples. He concluded that there was no match between the samples and Buckland, who became the first person to be exonerated using DNA. Jeffreys confirmed that the DNA profiles were identical for the two murder semen samples. To find the perpetrator, DNA samples were collected from the town's entire male population, more than 4,000 men aged 17 to 34, and all were compared to the semen samples from the crime. A friend of Colin Pitchfork was heard saying that he had given his sample to the police claiming to be Colin. Colin Pitchfork was arrested in 1987 and it was found that his DNA profile matched the semen samples from the murders.
Because of this case, DNA databases were developed. There are national databases (such as the FBI's) and international ones, as well as European networks (ENFSI: the European Network of Forensic Science Institutes). These searchable databases are used to match crime scene DNA profiles to those already on file.
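The matching such databases perform can be illustrated with a minimal sketch: a profile is modeled as allele pairs at a set of STR loci (the kind of markers systems like CODIS use), and a crime-scene profile is compared against database entries locus by locus. The locus names and profiles below are hypothetical, and real systems handle partial profiles, mixtures and match statistics far more carefully.

```python
# Illustrative sketch only, not actual forensic software.
# A profile maps STR locus names to an allele pair (genotype).

def same_genotype(a, b):
    """Allele pairs match regardless of order, e.g. (12, 14) == (14, 12)."""
    return sorted(a) == sorted(b)

def matches(scene_profile, db_profile):
    """A database entry is consistent if every locus typed in the
    scene profile has the same genotype in the database profile."""
    return all(
        locus in db_profile and same_genotype(genotype, db_profile[locus])
        for locus, genotype in scene_profile.items()
    )

def search(scene_profile, database):
    """Return the names of all database entries consistent with the scene profile."""
    return [name for name, profile in database.items() if matches(scene_profile, profile)]

# Hypothetical example data: two STR loci typed at the scene, two database entries.
scene = {"D8S1179": (12, 14), "TH01": (7, 9)}
database = {
    "profile_A": {"D8S1179": (14, 12), "TH01": (9, 7), "FGA": (21, 23)},
    "profile_B": {"D8S1179": (11, 13), "TH01": (7, 9), "FGA": (20, 22)},
}
print(search(scene, database))  # only profile_A is consistent at every typed locus
```

In practice a hit is only the start of an investigation: match statistics, confirmation typing and chain-of-custody checks follow before any identification is asserted.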
=== Maturation ===
By the turn of the 20th century, the science of forensics had become largely established in the sphere of criminal investigation. Scientific and surgical investigation was widely employed by the Metropolitan Police during their pursuit of the mysterious Jack the Ripper, who had killed a number of women in the 1880s. This case is a watershed in the application of forensic science. Large teams of policemen conducted house-to-house inquiries throughout Whitechapel. Forensic material was collected and examined. Suspects were identified, traced and either examined more closely or eliminated from the inquiry. Police work follows the same pattern today. Over 2000 people were interviewed, "upwards of 300" people were investigated, and 80 people were detained.
The investigation was initially conducted by the Criminal Investigation Department (CID), headed by Detective Inspector Edmund Reid. Later, Detective Inspectors Frederick Abberline, Henry Moore, and Walter Andrews were sent from Central Office at Scotland Yard to assist. Initially, butchers, surgeons and physicians were suspected because of the manner of the mutilations. The alibis of local butchers and slaughterers were investigated, with the result that they were eliminated from the inquiry. Some contemporary figures thought the pattern of the murders indicated that the culprit was a butcher or cattle drover on one of the cattle boats that plied between London and mainland Europe. Whitechapel was close to the London Docks, and usually such boats docked on Thursday or Friday and departed on Saturday or Sunday. The cattle boats were examined, but the dates of the murders did not coincide with a single boat's movements, and the transfer of a crewman between boats was also ruled out.
At the end of October, Robert Anderson asked police surgeon Thomas Bond to give his opinion on the extent of the murderer's surgical skill and knowledge. The opinion offered by Bond on the character of the "Whitechapel murderer" is the earliest surviving offender profile. Bond's assessment was based on his own examination of the most extensively mutilated victim and the post mortem notes from the four previous canonical murders. In his opinion the killer must have been a man of solitary habits, subject to "periodical attacks of homicidal and erotic mania", with the character of the mutilations possibly indicating "satyriasis". Bond also stated that "the homicidal impulse may have developed from a revengeful or brooding condition of the mind, or that religious mania may have been the original disease but I do not think either hypothesis is likely".
Handbook for Coroners, police officials, military policemen was written by the Austrian criminal jurist Hans Gross in 1893, and is generally acknowledged as the birth of the field of criminalistics. The work combined in one system fields of knowledge that had not been previously integrated, such as psychology and physical science, and which could be successfully used against crime. Gross adapted some fields to the needs of criminal investigation, such as crime scene photography. He went on to found the Institute of Criminalistics in 1912, as part of the University of Graz' Law School. This Institute was followed by many similar institutes all over the world.
In 1909, Archibald Reiss founded the Institut de police scientifique of the University of Lausanne (UNIL), the first school of forensic science in the world. Dr. Edmond Locard became known as the "Sherlock Holmes of France". He formulated the basic principle of forensic science: "Every contact leaves a trace", which became known as Locard's exchange principle. In 1910, he founded what may have been the first criminal laboratory in the world, after persuading the Police Department of Lyon (France) to give him two attic rooms and two assistants.
Symbolic of the newfound prestige of forensics and the use of reasoning in detective work was the popularity of the fictional character Sherlock Holmes, written by Arthur Conan Doyle in the late 19th century. He remains a great inspiration for forensic science, especially for the way his acute study of a crime scene yielded small clues as to the precise sequence of events. He made great use of trace evidence such as shoe and tire impressions, as well as fingerprints, ballistics and handwriting analysis, now known as questioned document examination. Such evidence is used to test theories conceived by the police, for example, or by the investigator himself. All of the techniques advocated by Holmes later became reality, but were generally in their infancy at the time Conan Doyle was writing. In many of his reported cases, Holmes frequently complains of the way the crime scene has been contaminated by others, especially by the police, emphasising the critical importance of maintaining its integrity, a now well-known feature of crime scene examination. He used analytical chemistry for blood residue analysis as well as toxicology examination and determination for poisons. He used ballistics by measuring bullet calibres and matching them with a suspected murder weapon.
=== Late 19th – early 20th century figures ===
Hans Gross applied scientific methods to crime scenes and was responsible for the birth of criminalistics.
Edmond Locard expanded on Gross' work with Locard's exchange principle which stated "whenever two objects come into contact with one another, materials are exchanged between them". This means that every contact by a criminal leaves a trace.
Alexandre Lacassagne, who taught Locard, produced autopsy standards on actual forensic cases.
Alphonse Bertillon was a French criminologist and founder of Anthropometry (scientific study of measurements and proportions of the human body). He used anthropometry for identification, stating that, since each individual is unique, by measuring aspects of physical difference there could be a personal identification system. He created the Bertillon System around 1879, a way of identifying criminals and citizens by measuring 20 parts of the body. In 1884, over 240 repeat offenders were caught using the Bertillon system, but the system was largely superseded by fingerprinting.
Joseph Thomas Walker was known for his work at the Massachusetts State Police Chemical Laboratory, for developing many modern forensic techniques, which he frequently published in academic journals, and for teaching at the Department of Legal Medicine, Harvard University.
Frances Glessner Lee, known as "the mother of forensic science", was instrumental in the development of forensic science in the US. She lobbied to have coroners replaced by medical professionals, endowed the Harvard Associates in Police Science, and conducted many seminars to educate homicide investigators. She also created the Nutshell Studies of Unexplained Death, intricate crime scene dioramas used to train investigators, which are still in use today.
=== 20th century ===
Later in the 20th century several British pathologists, Bernard Spilsbury, Francis Camps, Sydney Smith and Keith Simpson, pioneered new forensic science methods. Alec Jeffreys pioneered the use of DNA profiling in forensic science in 1984. He realized the scope of DNA fingerprinting, which uses variations in the genetic code to identify individuals. The method has since become important in forensic science to assist police detective work, and it has also proved useful in resolving paternity and immigration disputes. DNA fingerprinting was first used as a police forensic test to identify the rapist and killer of two teenagers, Lynda Mann and Dawn Ashworth, who were both murdered in Narborough, Leicestershire, in 1983 and 1986 respectively. Colin Pitchfork was identified and convicted of murder after samples taken from him matched semen samples taken from the two dead girls.
Forensic science has been fostered by a number of national and international forensic science learned bodies including the American Academy of Forensic Sciences (founded 1948), publishers of the Journal of Forensic Sciences; the Canadian Society of Forensic Science (founded 1953), publishers of the Journal of the Canadian Society of Forensic Science; the Chartered Society of Forensic Sciences, (founded 1959), then known as the Forensic Science Society, publisher of Science & Justice; the British Academy of Forensic Sciences (founded 1960), publishers of Medicine, Science and the Law; the Australian Academy of Forensic Sciences (founded 1967), publishers of the Australian Journal of Forensic Sciences; and the European Network of Forensic Science Institutes (founded 1995).
=== 21st century ===
In the past decade, documenting forensic scenes has become more efficient. Forensic scientists have started using laser scanners, drones and photogrammetry to obtain 3D point clouds of accidents or crime scenes. Reconstruction of an accident scene on a highway using drones involves a data acquisition time of only 10–20 minutes and can be performed without shutting down traffic. The results are not only accurate to the centimetre for measurements presented in court, but also easy to preserve digitally in the long term.
Now, in the 21st century, much of forensic science's future is up for discussion. The National Institute of Standards and Technology (NIST) has several forensic science-related programs: CSAFE, a NIST Center of Excellence in Forensic Science, the National Commission on Forensic Science (now concluded), and administration of the Organization of Scientific Area Committees for Forensic Science (OSAC). One of the more recent additions by NIST is a document called NISTIR-7941, titled "Forensic Science Laboratories: Handbook for Facility Planning, Design, Construction, and Relocation". The handbook provides a clear blueprint for approaching forensic science. The details even include what type of staff should be hired for certain positions.
== Subdivisions ==
Art forensics concerns the authentication of works of art, helping to research their authenticity. Art authentication methods are used to detect and identify forgery, faking and copying of art works, e.g. paintings.
Bloodstain pattern analysis is the scientific examination of blood spatter patterns found at a crime scene to reconstruct the events of the crime.
Comparative forensics is the application of visual comparison techniques to verify similarity of physical evidence. This includes fingerprint analysis, toolmark analysis, and ballistic analysis.
Computational forensics concerns the development of algorithms and software to assist forensic examination.
Criminalistics is the application of various sciences to answer questions relating to examination and comparison of biological evidence, trace evidence, impression evidence (such as fingerprints, footwear impressions, and tire tracks), controlled substances, ballistics, firearm and toolmark examination, and other evidence in criminal investigations. In typical circumstances, evidence is processed in a crime lab.
Digital forensics is the application of proven scientific methods and techniques in order to recover data from electronic / digital media. Digital Forensic specialists work in the field as well as in the lab.
Ear print analysis is used as a means of forensic identification intended as an identification tool similar to fingerprinting. An earprint is a two-dimensional reproduction of the parts of the outer ear that have touched a specific surface (most commonly the helix, antihelix, tragus and antitragus).
Election forensics is the use of statistics to determine if election results are normal or abnormal. It is also used to investigate and detect cases of gerrymandering.
Forensic accounting is the study and interpretation of accounting evidence and financial statements, namely the balance sheet, income statement and cash flow statement.
Forensic aerial photography is the study and interpretation of aerial photographic evidence.
Forensic anthropology is the application of physical anthropology in a legal setting, usually for the recovery and identification of skeletonized human remains.
Forensic archaeology is the application of a combination of archaeological techniques and forensic science, typically in law enforcement.
Forensic astronomy uses methods from astronomy to determine past celestial constellations for forensic purposes.
Forensic botany is the study of plant life in order to gain information regarding possible crimes.
Forensic chemistry is the study of detection and identification of illicit drugs, accelerants used in arson cases, explosive and gunshot residue.
Forensic dactyloscopy is the study of fingerprints.
Forensic document examination or questioned document examination answers questions about a disputed document using a variety of scientific processes and methods. Many examinations involve a comparison of the questioned document, or components of the document, with a set of known standards. The most common type of examination involves handwriting, whereby the examiner tries to address concerns about potential authorship.
Forensic DNA analysis takes advantage of the uniqueness of an individual's DNA to answer forensic questions such as paternity/maternity testing and placing a suspect at a crime scene, e.g. in a rape investigation.
Forensic engineering is the scientific examination and analysis of structures and products relating to their failure or cause of damage.
Forensic entomology deals with the examination of insects in, on and around human remains to assist in determination of time or location of death. It is also possible to determine if the body was moved after death using entomology.
Forensic geology deals with trace evidence in the form of soils, minerals and petroleum.
Forensic geomorphology is the study of the ground surface to look for potential location(s) of buried object(s).
Forensic geophysics is the application of geophysical techniques such as radar for detecting objects hidden underground or underwater.
The forensic intelligence process starts with the collection of data and ends with the integration of results into the analysis of crimes under investigation.
Forensic interviewing is the professional use of specialized expertise to conduct a variety of investigative interviews with victims, witnesses, suspects or other sources to determine the facts regarding suspicions, allegations or specific incidents in either public or private sector settings.
Forensic histopathology is the application of histological techniques and examination to forensic pathology practice.
Forensic limnology is the analysis of evidence collected from crime scenes in or around fresh-water sources. Examination of biological organisms, in particular diatoms, can be useful in connecting suspects with victims.
Forensic linguistics deals with issues in the legal system that require linguistic expertise.
Forensic meteorology is a site-specific analysis of past weather conditions for a point of loss.
Forensic metrology is the application of metrology to assess the reliability of scientific evidence obtained through measurements.
Forensic microbiology is the study of the necrobiome.
Forensic nursing is the application of Nursing sciences to abusive crimes, like child abuse, or sexual abuse. Categorization of wounds and traumas, collection of bodily fluids and emotional support are some of the duties of forensic nurses.
Forensic odontology is the study of the uniqueness of dentition, better known as the study of teeth.
Forensic optometry is the study of glasses and other eyewear relating to crime scenes and criminal investigations.
Forensic pathology is a field in which the principles of medicine and pathology are applied to determine a cause of death or injury in the context of a legal inquiry.
Forensic podiatry is the application of the study of feet, footprints or footwear and their traces to analyze crime scenes and to establish personal identity in forensic examinations.
Forensic psychiatry is a specialized branch of psychiatry as applied to and based on scientific criminology.
Forensic psychology is the study of the mind of an individual, using forensic methods. Usually it determines the circumstances behind a criminal's behavior.
Forensic seismology is the study of techniques to distinguish the seismic signals generated by underground nuclear explosions from those generated by earthquakes.
Forensic serology is the study of body fluids.
Forensic social work is the specialist study of social work theories and their applications to a clinical, criminal justice or psychiatric setting. Practitioners of forensic social work connected with the criminal justice system are often termed social supervisors, whilst others use the interchangeable titles forensic social worker, approved mental health professional or forensic practitioner; they conduct specialist assessments of risk and care planning and act as officers of the court.
Forensic toxicology is the study of the effect of drugs and poisons on/in the human body.
Forensic video analysis is the scientific examination, comparison and evaluation of video in legal matters.
Mobile device forensics is the scientific examination and evaluation of evidence found in mobile phones, e.g. Call History and Deleted SMS, and includes SIM Card Forensics.
Trace evidence analysis is the analysis and comparison of trace evidence including glass, paint, fibres and hair (e.g., using micro-spectrophotometry).
Wildlife forensic science applies a range of scientific disciplines to legal cases involving non-human biological evidence, to solve crimes such as poaching, animal abuse, and trade in endangered species.
== Questionable techniques ==
Some forensic techniques, believed to be scientifically sound at the time they were used, have turned out later to have much less scientific merit or none. Some such techniques include:
Comparative bullet-lead analysis was used by the FBI for over four decades, starting with the John F. Kennedy assassination in 1963. The theory was that each batch of ammunition possessed a chemical makeup so distinct that a bullet could be traced back to a particular batch or even a specific box. Internal studies and an outside study by the National Academy of Sciences found that the technique was unreliable due to improper interpretation, and the FBI abandoned the test in 2005.
Forensic dentistry has come under fire: in at least three cases bite-mark evidence has been used to convict people of murder who were later freed by DNA evidence. The theory is that each person has a unique and distinctive set of teeth, which leave a pattern after biting someone; examiners analyze dental characteristics such as size, shape, and arch form. A 1999 study by a member of the American Board of Forensic Odontology found a 63 percent rate of false identifications and is commonly referenced within online news stories and conspiracy websites. The study was based on an informal workshop during an ABFO meeting, which many members did not consider a valid scientific setting.
Police access to genetic genealogy databases: There are privacy concerns with the police being able to access personal genetic data held by genealogy services. Individuals can unwittingly become genetic informants on themselves or their own families simply by participating in genetic genealogy databases. The Combined DNA Index System (CODIS) is a database that the FBI uses to hold genetic profiles of known felons, misdemeanants, and arrestees. Some people argue that individuals who use genealogy databases should have an expectation of privacy in their data that is, or may be, violated by genetic searches by law enforcement. These services carry warnings about potential third parties using the information, but most individuals do not read the agreement thoroughly. A study by Christi Guerrini, Jill Robinson, Devan Petersen, and Amy McGuire found that the majority of survey respondents support police searches of genetic websites that identify genetic relatives. Respondents were more supportive of police use of genetic genealogy when its purpose is to identify offenders of violent crimes, suspects of crimes against children, or missing people. The survey data suggest that individuals are not concerned about police searches using personal genetic data when they are justified. The study also found that offenders are disproportionately low-income and black, while the average consumer of genetic testing is wealthy and white.
Other surveys have produced different results. In 2016, the National Crime Victimization Survey (NCVS), conducted by the US Bureau of Justice Statistics, found that 1.3% of people aged 12 or older were victims of violent crimes and 8.8% of households were victims of property crimes. That survey has limitations, however: the NCVS produces only annual estimates of victimization, whereas the survey by Guerrini, Robinson, Petersen, and McGuire asked participants about incidents of victimization over their lifetimes, and it did not restrict family members to a single household. Around 25% of its respondents said they had family members who have been employed by law enforcement, including security guards and bailiffs. Across these surveys, there has been public support for law enforcement access to genetic genealogy databases.
== Litigation science ==
"Litigation science" describes analysis or data developed or produced expressly for use in a trial versus those produced in the course of independent research. This distinction was made by the U.S. 9th Circuit Court of Appeals when evaluating the admissibility of experts.
This uses demonstrative evidence, which is evidence created in preparation of trial by attorneys or paralegals.
== Demographics ==
As of 2025, there are an estimated 18,500 forensic science technicians in the United States.
== Media impact ==
Real-life crime scene investigators and forensic scientists warn that popular television shows do not give a realistic picture of the work, often wildly distorting its nature, and exaggerating the ease, speed, effectiveness, drama, glamour, influence and comfort level of their jobs—which they describe as far more mundane, tedious and boring.
Some claim these modern TV shows have changed individuals' expectations of forensic science, sometimes unrealistically—an influence termed the "CSI effect".
Further, research has suggested that public misperceptions about criminal forensics can create, in the mind of a juror, unrealistic expectations of forensic evidence—which they expect to see before convicting—implicitly biasing the juror towards the defendant. Citing the "CSI effect," at least one researcher has suggested screening jurors for their level of influence from such TV programs.
Further, research has found that newspaper coverage shapes readers' general knowledge and perceptions of science and technology in a rather positive way, and can lead to support for the field as readers' interest prompts them to seek further knowledge on the topic.
== Controversies ==
Questions about certain areas of forensic science, such as fingerprint evidence, and the assumptions behind these disciplines have been raised in some publications, including the New York Post. The article stated that "No one has proved even the basic assumption: That everyone's fingerprint is unique." The article also stated that "Now such assumptions are being questioned—and with it may come a radical change in how forensic science is used by police departments and prosecutors." Law professor Jessica Gabel said on NOVA that forensic science "lacks the rigors, the standards, the quality controls and procedures that we find, usually, in science".
The National Institute of Standards and Technology has reviewed the scientific foundations of bite-mark analysis used in forensic science. Bite-mark analysis is a forensic technique that compares the marks on a victim's skin to a suspect's teeth. NIST reviewed the findings of the National Academies of Sciences, Engineering, and Medicine's 2009 study, which addressed the reliability and accuracy of bite-mark analysis and concluded that there is a lack of sufficient scientific foundation to support the technique. Yet the technique is still legal to use in court as evidence. NIST funded a 2019 meeting of dentists, lawyers, researchers and others to address the gaps in this field.
In the US, on 25 June 2009, the Supreme Court issued a 5-to-4 decision in Melendez-Diaz v. Massachusetts stating that crime laboratory reports may not be used against criminal defendants at trial unless the analysts responsible for creating them give testimony and subject themselves to cross-examination. The Supreme Court cited the National Academies of Sciences report Strengthening Forensic Science in the United States in their decision. Writing for the majority, Justice Antonin Scalia referred to the National Research Council report in his assertion that "Forensic evidence is not uniquely immune from the risk of manipulation."
In the US, another area of forensic science that has come under question in recent years is the lack of laws requiring the accreditation of forensic labs. Some states require accreditation, but others do not. Because of this, many labs have been caught performing very poor work, resulting in false convictions or acquittals. For example, an audit of the Houston Police Department in 2002 discovered that the lab had fabricated evidence, which led to George Rodriguez being convicted of raping a fourteen-year-old girl. The former director of the lab, when asked, said that the total number of cases that could have been contaminated by improper work could be in the range of 5,000 to 10,000.
The Innocence Project database of DNA exonerations shows that many wrongful convictions involved forensic science errors. According to the Innocence Project and the US Department of Justice, forensic science has contributed to between about 39 and 46 percent of wrongful convictions. As indicated by the National Academy of Sciences report Strengthening Forensic Sciences in the United States, part of the problem is that many traditional forensic sciences have never been empirically validated; and part of the problem is that all examiners are subject to forensic confirmation biases and should be shielded from contextual information not relevant to the judgment they make.
Many studies have discovered a difference in rape-related injuries reporting based on race, with white victims reporting a higher frequency of injuries than black victims. However, since current forensic examination techniques may not be sensitive to all injuries across a range of skin colors, more research needs to be conducted to understand if this trend is due to skin confounding healthcare providers when examining injuries or if darker skin extends a protective element. In clinical practice, for patients with darker skin, one study recommends that attention must be paid to the thighs, labia majora, posterior fourchette and fossa navicularis, so that no rape-related injuries are missed upon close examination.
== Forensic science and humanitarian work ==
The International Committee of the Red Cross (ICRC) uses forensic science for humanitarian purposes to clarify the fate of missing persons after armed conflict, disasters or migration, and is one of the services related to Restoring Family Links and Missing Persons. Knowing what has happened to a missing relative can often make it easier to proceed with the grieving process and move on with life for families of missing persons.
Forensic science is used by various other organizations to clarify the fate and whereabouts of persons who have gone missing. Examples include the NGO Argentine Forensic Anthropology Team, working to clarify the fate of people who disappeared during the period of the 1976–1983 military dictatorship. The International Commission on Missing Persons (ICMP) used forensic science to find missing persons, for example after the conflicts in the Balkans.
Recognising the role of forensic science for humanitarian purposes, as well as the importance of forensic investigations in fulfilling the state's responsibilities to investigate human rights violations, a group of experts in the late-1980s devised a UN Manual on the Prevention and Investigation of Extra-Legal, Arbitrary and Summary Executions, which became known as the Minnesota Protocol. This document was revised and re-published by the Office of the High Commissioner for Human Rights in 2016.
== See also ==
Association of Firearm and Tool Mark Examiners – International non-profit organization
Canadian Identification Society
Computer forensics – Branch of digital forensic science
Crime science – Study of crime in order to find ways to prevent it
Diplomatics – Academic study of the protocols of documents (forensic paleography)
Epigenetics in forensic science – Overview article
Evidence packaging – Specialized packaging for physical evidence
Forensic biology – Forensic application of the study of biology
Forensic economics
Forensic identification – Legal identification of specific objects and materials
Forensic materials engineering – Branch of forensic engineering
Forensic photography – Art of producing an accurate reproduction of a crime scene
Forensic polymer engineering – Study of failure in polymeric products
Forensic profiling – Study of trace evidence in criminal investigations
Glove prints – Mark left on a surface by a worn glove
History of forensic photography
International Association for Identification
Marine forensics – Legal issues of marine life
Outline of forensic science – Overview of and topical guide to forensic science
Profiling (information science) – Process of construction and application of user profiles generated by computerized data analysis
Retrospective diagnosis – Practice of identifying an illness after the death of the patient
Rapid Stain Identification Series (RSID)
Scenes of crime officer – Officer who gathers forensic evidence for the British police
Skid mark – Mark left by any solid which moves against another
University of Florida forensic science distance education program
== References ==
== Bibliography ==
== External links ==
Media related to Forensic science at Wikimedia Commons
Forensic educational resources
Dunning, Brian (1 March 2022). "Skeptoid #821: Forensic (Pseudo) Science". Skeptoid. Retrieved 15 May 2022.
In electronics, a split-pi topology is a pattern of component interconnections used in a kind of power converter that can theoretically produce an arbitrary output voltage, either higher or lower than the input voltage. In practice the upper voltage output is limited to the voltage rating of components used. It is essentially a boost (step-up) converter followed by a buck (step-down) converter. The topology and use of MOSFETs make it inherently bi-directional which lends itself to applications requiring regenerative braking.
The split-pi converter is a type of DC-to-DC converter that has an output voltage magnitude either greater than or less than the input voltage magnitude. It is a switched-mode power supply with a circuit configuration similar to a boost converter followed by a buck converter. Split-pi gets its name from the pi circuit, due to the use of two pi filters in series, split by the switching MOSFET bridges.
Other DC–DC converter topologies that can produce output voltage magnitude either greater than or less than the input voltage magnitude include the boost-buck converter topologies (the split-pi, the Ćuk converter, the SEPIC, etc.) and the buck–boost converter topologies.
== Principle of operation ==
In typical operation where a source voltage is located at the left-hand side input terminals, the left-hand bridge operates as a boost converter and the right-hand bridge operates as a buck converter. In regenerative mode, the reverse is true with the left-hand bridge operating as a buck converter and the right as the boost converter.
Only one bridge switches at any time to provide voltage conversion, with the unswitched bridge's top switch always on. A straight-through 1:1 voltage output is achieved with the top switch of each bridge switched on and the bottom switches off. The output voltage is adjustable based on the duty cycle of the switching MOSFET bridge.
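Assuming ideal components, the two stages compose simply: the boost bridge steps the input up by 1/(1 − D) and the buck bridge steps its output down by D. A minimal Python sketch of this relationship (the function name and example values are illustrative, not taken from any datasheet):

```python
def split_pi_output(v_in: float, d_boost: float, d_buck: float) -> float:
    """Ideal split-pi output voltage: a boost stage followed by a buck stage.

    d_boost and d_buck are the duty cycles of the two switching bridges
    (0 <= d_boost < 1, 0 <= d_buck <= 1). Losses, component ratings and
    switching dynamics are ignored.
    """
    v_mid = v_in / (1.0 - d_boost)   # boost stage: step up
    return v_mid * d_buck            # buck stage: step down

# Straight-through 1:1 (both top switches on, no switching).
print(split_pi_output(12.0, d_boost=0.0, d_buck=1.0))  # 12.0
# Step up 12 V -> 24 V using only the boost bridge.
print(split_pi_output(12.0, d_boost=0.5, d_buck=1.0))  # 24.0
# Step down 12 V -> 6 V using only the buck bridge.
print(split_pi_output(12.0, d_boost=0.0, d_buck=0.5))  # 6.0
```

The example mirrors the operating rule above: only one bridge switches at a time, while the other passes the voltage straight through.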
== Applications ==
Electric drivetrain
Motor control
Battery balancing
Regenerative braking
== References ==
British Patent GB2376357B - Power converter and method for power conversion
Restrepo, C.; et al. (2011). "A Noninverting Buck–Boost DC–DC Switching Converter With High Efficiency and Wide Bandwidth". IEEE Transactions on Power Electronics. 26 (9): 2490–2503. doi:10.1109/TPEL.2011.2172226.
A 2.5D integrated circuit (2.5D IC) is an advanced packaging technique that combines multiple integrated circuit dies in a single package without stacking them into a three-dimensional integrated circuit (3D-IC) with through-silicon vias (TSVs). The term "2.5D" originated when 3D-ICs with TSVs were quite new and still very difficult. Chip designers realized that many of the advantages of 3D integration could be approximated by placing bare dies side by side on an interposer instead of stacking them vertically. If the pitch is very fine and the interconnect very short, the assembly can be packaged as a single component with better size, weight, and power characteristics than a comparable 2D circuit board assembly. This half-way 3D integration was facetiously named "2.5D" and the name stuck.
Since then, 2.5D has proven to be far more than just "half-way to 3D."
Some benefits:
An interposer can support heterogeneous integration – that is, dies of different pitch, size, material, and process node.
Placing dies side by side instead of stacking them reduces heat buildup.
Upgrading or modifying a 2.5D assembly is as easy as swapping in a new component and revamping the interposer to suit, which is much faster and simpler than reworking an entire 3D-IC or System-on-Chip (SoC).
Some sophisticated 2.5D assemblies even incorporate TSVs and 3D components. Several foundries now support 2.5D packaging.
The success of 2.5D assembly has given rise to "chiplets" – small, functional circuit blocks designed to be combined in mix-and-match fashion on interposers. Several high-end products already take advantage of these LEGO-style chiplets; some experts predict the emergence of an industry-wide chiplet ecosystem. Interposers can be larger than the reticle size, which is the maximum area that can be projected by a photolithography scanner or stepper.
== References ==
The Strategy of Technology doctrine involves a country using its advantage in technology to create and deploy weapons of sufficient power and numbers so as to overawe or beggar its opponents, forcing them to spend their limited resources on developing hi-tech countermeasures and straining their economy.
In 1983, the US Defense Intelligence Agency established a classified program, Project Socrates, to develop a national technology strategy policy. This program was designed to maintain US military strength relative to the Soviet Union, while also maintaining the economic and military strength required to keep the US a superpower.
The Strategy of Technology is described in the eponymous book written by Stefan T. Possony, Jerry Pournelle and Francis X. Kane (Col., USAF, Ret.) in 1970. It was required reading in the U.S. service academies, the Air War College, and the National War College during the latter half of the Cold War.
== Cold War ==
The classic example of the successful deployment of this strategy was the nuclear build-up between the U.S. and U.S.S.R. during the Cold War.
Some observers believe that the Vietnam War was a necessary attritional component of this strategy, diverting Soviet industrial capacity to conventional arms for North Vietnam rather than to the development of new conventional and nuclear weapons, but evidence would be needed that the US administration of the time saw it that way. Current consensus and evidence hold that it was merely a failed defensive move in the Cold War, made in the context of the Domino Doctrine.
The coup de grâce is variously held to be stealth technology, especially as embodied in the cruise missile, which would have required an unattainable number of installations to secure the Soviet border; the Gulf War, which proved stealth technology and easily overcame Soviet-doctrine Iraqi forces; or Ronald Reagan's Strategic Defense Initiative, a clear attempt to render the Soviet nuclear arsenal obsolete, creating an immense expense for the Soviets to maintain parity.
== Opposing views and controversies ==
It is argued that the strategy was not a great success in the Cold War; that the Soviet Union did little to try to keep up with the SDI system, and that the War in Afghanistan caused a far greater drain on Soviet resources. However, the Soviets spent a colossal amount of money on their Buran space shuttle in an attempt to compete with a perceived military threat from the American Space Shuttle program, which was to be used in the SDI.
There is a further consideration. It is not seriously in doubt that despite the excellent education and training of Soviet technologists and scientists, it was the nations of Europe and North America, in particular the United States, which made most of the running in technical development.
The Soviet Union did have some extraordinary technical breakthroughs of their own. For example: the 15% efficiency advantage of Soviet rocket engines which used exhaust gases to power the fuel pumps, or the VA-111 Shkval supersonic cavitation torpedo. It was also able to use both its superlative espionage arm and the inherent ability of central planning to concentrate resources to great effect.
But the United States found a way to use its opponent's strengths for its own purposes. In the late 1990s, it emerged that an arm of American intelligence had funnelled many stolen technological secrets to the Soviet Union. The documents were genuine, but they described versions of the product that contained a critical yet non-obvious flaw.
Such was the complexity and depth of the stolen secrets that checking them would have required an effort almost as great as developing a similar product from scratch. Such an effort was possible in the nations of the West because the cost could be defrayed by commercial sales; in the Soviet states this was not an option. This sort of technological jiu-jitsu may set the pattern of future engagements.
== References ==
== External links ==
The Strategy of Technology by Stefan T. Possony, Ph.D.; Jerry E. Pournelle, Ph.D. and Francis X. Kane, Ph.D. (Col., USAF Ret.) [The full text, free, with a suggested contribution.]
How relevant was U.S. strategy in winning the Cold War?, banquet address by John Lewis Gaddis.
The technology acceptance model (TAM) is an information systems theory that models how users come to accept and use a technology.
Actual system use is the end point at which people use the technology. Behavioral intention (BI) is the factor that leads people to use the technology; it is influenced by attitude (A), the user's general impression of the technology.
The model suggests that when users are presented with a new technology, a number of factors influence their decision about how and when they will use it, notably:
Perceived usefulness (PU) – This was defined by Fred Davis as "the degree to which a person believes that using a particular system would enhance their job performance". In other words, it captures whether someone perceives the technology as useful for what they want to do.
Perceived ease-of-use (PEOU) – Davis defined this as "the degree to which a person believes that using a particular system would be free from effort". If the technology is easy to use, the barrier to adoption is lowered; if it is hard to use and the interface is complicated, users are unlikely to form a positive attitude towards it.
External variables such as social influence are also important factors in determining attitude. When these factors are in place, people will form the attitude and intention to use the technology. However, perceptions may vary with characteristics such as age and gender, because every user is different.
The TAM has been continuously studied and expanded—the two major upgrades being the TAM 2 and the unified theory of acceptance and use of technology (or UTAUT). A TAM 3 has also been proposed in the context of e-commerce with an inclusion of the effects of trust and perceived risk on system use.
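The causal chain described above (ease of use feeds usefulness; usefulness and ease of use shape attitude; attitude drives intention) can be sketched as a toy linear model. The function name and all weights below are illustrative placeholders, not empirical TAM estimates:

```python
def tam_intention(peou, pu_base, w_peou_pu=0.4, w_pu=0.6, w_peou=0.3, w_att=0.8):
    """Toy linear version of the TAM causal chain.

    peou    : perceived ease of use, on an arbitrary 0..1 scale
    pu_base : perceived usefulness before the ease-of-use effect, 0..1
    Weights are illustrative placeholders, not estimates from the literature.
    """
    pu = pu_base + w_peou_pu * peou        # PEOU also raises perceived usefulness
    attitude = w_pu * pu + w_peou * peou   # attitude is driven by PU and PEOU
    intention = w_att * attitude           # behavioral intention follows attitude
    return intention

# An easier-to-use system yields a higher behavioral intention,
# holding baseline usefulness fixed.
print(round(tam_intention(peou=0.9, pu_base=0.5), 4))  # 0.6288
print(round(tam_intention(peou=0.2, pu_base=0.5), 4))  # 0.3264
```

The point of the sketch is only the structure: ease of use influences intention twice, once directly through attitude and once indirectly through perceived usefulness.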
== Background ==
TAM is one of the most influential extensions of Ajzen and Fishbein's theory of reasoned action (TRA) in the literature. Davis's technology acceptance model (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989) is the most widely applied model of users' acceptance and usage of technology (Venkatesh, 2000). It was developed by Fred Davis and Richard Bagozzi. TAM replaces many of TRA's attitude measures with two technology acceptance measures: ease of use and usefulness. TRA and TAM, both of which have strong behavioural elements, assume that when someone forms an intention to act, they will be free to act without limitation. In the real world there will be many constraints, such as limited freedom to act.
Bagozzi, Davis and Warshaw say:
Because new technologies such as personal computers are complex and an element of uncertainty exists in the minds of decision makers with respect to the successful adoption of them, people form attitudes and intentions toward trying to learn to use the new technology prior to initiating efforts directed at using. Attitudes towards usage and intentions to use may be ill-formed or lacking in conviction or else may occur only after preliminary strivings to learn to use the technology evolve. Thus, actual usage may not be a direct or immediate consequence of such attitudes and intentions.
Earlier research on the diffusion of innovations also suggested a prominent role for perceived ease of use. Tornatzky and Klein analysed innovation adoption, finding that compatibility, relative advantage, and complexity had the most significant relationships with adoption across a broad range of innovation types. Eason studied perceived usefulness in terms of a fit between systems, tasks and job profiles, using the term "task fit" to describe the metric. Legris, Ingham and Collerette suggest that TAM must be extended to include variables that account for change processes and that this could be achieved through adoption of the innovation model into TAM.
== Usage ==
Several researchers have replicated Davis's original study to provide empirical evidence on the relationships that exist between usefulness, ease of use and system use. Much attention has focused on testing the robustness and validity of the questionnaire instrument used by Davis. Adams et al. replicated the work of Davis to demonstrate the validity and reliability of his instrument and his measurement scales. They also extended it to different settings and, using two different samples, demonstrated the internal consistency and replication reliability of the two scales. Hendrickson et al. found high reliability and good test-retest reliability. Szajna found that the instrument had predictive validity for intent to use, self-reported usage and attitude toward use. The sum of this research has confirmed the validity of the Davis instrument and supported its use with different populations of users and different software choices.
Segars and Grover re-examined Adams et al.'s replication of the Davis work. They were critical of the measurement model used, and postulated a different model based on three constructs: usefulness, effectiveness, and ease-of-use. These findings do not yet seem to have been replicated. However, some aspects of these findings were tested and supported by Workman by separating the dependent variable into information use versus technology use.
Mark Keil and his colleagues have developed (or, perhaps, popularised) Davis's model into what they call the Usefulness/EOU Grid, a 2×2 grid where each quadrant represents a different combination of the two attributes. In the context of software use, this provides a mechanism for discussing the current mix of usefulness and EOU for particular software packages, and for plotting a different course if a different mix is desired, such as the introduction of even more powerful software.
The TAM model has been used in most technological and geographic contexts. One of these contexts is health care, which is growing rapidly.
Saravanos et al. extended the TAM model to incorporate emotion and the effect that may play on the behavioral intention to accept a technology. Specifically, they looked at warm-glow.
Venkatesh and Davis extended the original TAM model to explain perceived usefulness and usage intentions in terms of social influence (subjective norms, voluntariness, image) and cognitive instrumental processes (job relevance, output quality, result demonstrability, perceived ease of use). The extended model, referred to as TAM2, was tested in both voluntary and mandatory settings. The results strongly supported TAM2.
Subjective norm – An individual's perception that people who are important to them think they should or should not perform a behavior. This was consistent with the theory of reasoned action (TRA).
Voluntariness – This was defined by Venkatesh & Davis as "extent to which potential adopters perceive the adoption decision to be non-mandatory".
Image – This was defined by Moore & Benbasat as "the degree to which use of an innovation is perceived to enhance one's status in one's social system".
Job relevance – Venkatesh & Davis defined this as personal perspective on the extent to which the target system is suitable for the job.
Output quality – Venkatesh & Davis defined this as personal perception of the system's ability to perform specific tasks.
Result demonstrability – The production of tangible results will directly influence the system's usefulness.
In an attempt to integrate the main competing user acceptance models, Venkatesh et al. formulated the unified theory of acceptance and use of technology (UTAUT). This model was found to outperform each of the individual models (adjusted R-squared of 69 percent). UTAUT has been adopted by some recent studies in healthcare.
In addition, authors Jun et al. also think that the technology acceptance model is essential to analyze the factors affecting customers’ behavior towards online food delivery services. It is also a widely adopted theoretical model to demonstrate the acceptance of new technology fields. The foundation of TAM is a series of concepts that clarifies and predicts people’s behaviors with their beliefs, attitudes, and behavioral intention. In TAM, perceived ease of use and perceived usefulness, considered general beliefs, play a more vital role than salient beliefs in attitudes toward utilizing a particular technology.
== Alternative models ==
The MPT model: Independent of TAM, Scherer developed the matching person and technology model in 1986 as part of her National Science Foundation-funded dissertation research. The MPT model is fully described in her 1993 text, "Living in the State of Stuck", now in its 4th edition. The MPT model has accompanying assessment measures used in technology selection and decision-making, as well as outcomes research on differences among technology users, non-users, avoiders, and reluctant users.
The HMSAM: TAM has been effective for explaining many kinds of systems use (i.e. e-learning, learning management systems, webportals, etc.) (Fathema, Shannon, Ross, 2015; Fathema, Ross, Witte, 2014). However, TAM is not ideally suited to explain adoption of purely intrinsic or hedonic systems (e.g., online games, music, learning for pleasure). Thus, an alternative model to TAM, called the hedonic-motivation system adoption model (HMSAM), was proposed for these kinds of systems by Lowry et al. HMSAM is designed to improve the understanding of hedonic-motivation systems (HMS) adoption. HMS are systems used primarily to fulfill users' intrinsic motivations, such as online gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, gamified systems, and general gamification. Instead of a minor TAM extension, HMSAM is an HMS-specific system acceptance model based on an alternative theoretical perspective, which is in turn grounded in flow-based cognitive absorption (CA). HMSAM may be especially useful in understanding gamification elements of systems use.
Extended TAM: Several studies proposed extension of original TAM (Davis, 1989) by adding external variables in it with an aim of exploring the effects of external factors on users' attitude, behavioral intention and actual use of technology. Several factors have been examined so far. For example, perceived self-efficacy, facilitating conditions, and systems quality (Fathema, Shannon, Ross, 2015, Fathema, Ross, Witte, 2014). This model has also been applied in the acceptance of health care technologies.
== Criticisms ==
TAM has been widely criticised, despite its frequent use, leading the original proposers to attempt to redefine it several times. Criticisms of TAM as a "theory" include its questionable heuristic value, limited explanatory and predictive power, triviality, and lack of any practical value. Benbasat and Barki suggest that TAM "has diverted researchers' attention away from other important research issues and has created an illusion of progress in knowledge accumulation. Furthermore, the independent attempts by several researchers to expand TAM in order to adapt it to the constantly changing IT environments has lead [sic] to a state of theoretical chaos and confusion". In general, TAM focuses on the individual 'user' of a computer and the concept of 'perceived usefulness', extended to bring in more and more factors to explain how a user 'perceives' 'usefulness'; it ignores the essentially social processes of IS development and implementation, does not question whether more technology is actually better, and overlooks the social consequences of IS use. Lunceford argues that the framework of perceived usefulness and ease of use overlooks other issues, such as cost and structural imperatives that force users into adopting the technology. For a recent analysis and critique of TAM, see Bagozzi.
Legris et al. claim that, together, TAM and TAM2 account for only 40% of a technological system's use.
Perceived ease of use is less likely to be a determinant of attitude and usage intention according to studies of telemedicine, mobile commerce, and online banking.
== See also ==
== Notes ==
== References ==
The 4000 series is a CMOS logic family of integrated circuits (ICs) first introduced in 1968 by RCA. It was slowly migrated into the 4000B buffered series after about 1975. It had a much wider supply voltage range than any contemporary logic family (3V to 18V recommended range for "B" series). Almost all IC manufacturers active during this initial era fabricated models for this series. Its naming convention is still in use today.
== History ==
The 4000 series was introduced as the CD4000 COS/MOS series in 1968 by RCA as a lower power and more versatile alternative to the 7400 series of transistor-transistor logic (TTL) chips. The logic functions were implemented with the newly introduced Complementary Metal–Oxide–Semiconductor (CMOS) technology. While initially marketed with "COS/MOS" labeling by RCA (which stood for Complementary Symmetry Metal-Oxide Semiconductor), the shorter CMOS terminology emerged as the industry preference to refer to the technology. The first chips in the series were designed by a group led by Albert Medwin.
Wide adoption was initially hindered by the comparatively low speed of the designs relative to TTL-based designs. Speed limitations were eventually overcome with newer fabrication methods (such as self-aligned polysilicon gates instead of metal gates), and these CMOS variants performed on par with contemporary TTL. The series was extended in the late 1970s and 1980s with new models that were given 45xx and 45xxx designations, but are usually still regarded by engineers as part of the 4000 series. In the 1990s, some manufacturers (e.g. Texas Instruments) ported the 4000 series to newer HCMOS-based designs to provide greater speeds.
== Design considerations ==
The 4000 series facilitates simpler circuit design through relatively low power consumption, a wide range of supply voltages, and vastly increased load-driving capability (fanout) compared to TTL. This makes the series ideal for use in prototyping LSI designs. While TTL ICs are similarly modular, these usually lack the symmetrical drive strength of CMOS and may therefore require more consideration of the loads applied on its outputs.
Just like with TTL, buffered models can drive higher electrical current (mainly available for I/O-devices like octal latches and three-state drivers) but have a slightly higher risk of introducing ringing (transient oscillations) unless correctly damped or terminated. Many models contain a high level of integration, including fully integrated 7-segment display counters, walking ring counters, and full adders.
== Common chips ==
Logic gates
Flip-flops
4013 – Dual D-type flip-flop. Each flip-flop has independent data, Q, /Q, clock, reset, set.
40174 – Hex D-type flip-flop. Each flip-flop has independent data and Q. All share clock and reset.
40175 – Quad D-type flip-flop. Each flip-flop has independent data, Q, /Q. All share clock and reset.
Counters
4017 – Decade counter with 10-output decoder.
4026 – Decade counter with 7-segment digit decoded output.
40110 – Up/down decade counter with 7-segment display decoder with 25 mA output drivers.
40192 – Up/down decade counter with 4-bit BCD preset.
40193 – Up/down binary counter with 4-bit binary preset.
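The behaviour of a decoded decade counter such as the 4017 can be sketched in a few lines: exactly one of ten outputs is high at a time, each clock edge advances it, and it wraps after output 9. This is a behavioural model only (no propagation delays, and it omits the real chip's clock-enable and carry-out pins):

```python
class DecadeCounter4017:
    """Behavioural sketch of a 4017-style decade counter with decoded outputs.

    One of the ten outputs is high at a time; each clock pulse advances it,
    wrapping after output 9. Reset returns to output 0. Timing, clock-enable
    and carry-out are not modelled.
    """
    def __init__(self):
        self.count = 0

    def clock(self):
        self.count = (self.count + 1) % 10

    def reset(self):
        self.count = 0

    def outputs(self):
        return [1 if i == self.count else 0 for i in range(10)]

c = DecadeCounter4017()
for _ in range(12):     # 12 pulses wrap past output 9 back to output 2
    c.clock()
print(c.count)          # 2
print(c.outputs())      # [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```

The same structure, with the modulus changed, would model the binary and BCD presettable counters listed above.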
Decoders
4028 – 4-bit BCD to 10-output decoder (can be used as 3-bit binary to 8-output decoder)
4511 – 4-bit BCD to 7-segment display decoder with 25 mA output drivers.
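A BCD-to-7-segment decoder such as the 4511 is essentially a lookup table from a 4-bit code to seven segment lines (a–g). The sketch below uses the common segment-pattern convention (bit 0 = segment a, 1 = segment on) and blanks the display for non-BCD inputs 10–15, as the 4511 does; the function name and bit ordering are illustrative assumptions:

```python
# Segment patterns for digits 0-9, bits ordered g f e d c b a (1 = on).
SEG = [0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F]

def decode_bcd(d3, d2, d1, d0):
    """Return the seven segment outputs (a..g) for a 4-bit BCD input.

    Inputs 10-15 blank the display, as on the 4511. Latch-enable, lamp-test
    and blanking pins of the real chip are not modelled.
    """
    value = d3 * 8 + d2 * 4 + d1 * 2 + d0
    pattern = SEG[value] if value < 10 else 0   # blank non-BCD codes
    return [(pattern >> i) & 1 for i in range(7)]

print(decode_bcd(0, 1, 1, 1))  # digit 7: segments a, b, c on -> [1, 1, 1, 0, 0, 0, 0]
```

The 4028 listed above is the same idea with ten mutually exclusive decimal outputs instead of segment patterns.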
Timers
4047 – Monostable/astable multivibrator with external RC oscillator.
4060 – 14-bit ripple counter with external RC or crystal oscillator (long duration) (schmitt-trigger inputs) (can be used with 32.768 kHz crystal)
4541 – 16-bit ripple counter with external RC oscillator (long duration).
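The note above about pairing the 4060 with a 32.768 kHz watch crystal works because each ripple-counter stage divides the frequency by two, so fourteen stages give 32768 / 2^14 = 2 Hz. A quick check (the 4060 brings out only a subset of its stages on real pins, which this arithmetic ignores):

```python
CRYSTAL_HZ = 32768  # common watch-crystal frequency, 2**15 Hz

def ripple_stage_hz(f_in: float, stage: int) -> float:
    """Frequency at a given divide-by-two stage of a ripple counter."""
    return f_in / (2 ** stage)

# The 4060's highest output, Q14, divides by 2**14.
print(ripple_stage_hz(CRYSTAL_HZ, 14))  # 2.0 Hz
# One further external flip-flop would yield a 1 Hz timebase for a clock.
print(ripple_stage_hz(CRYSTAL_HZ, 15))  # 1.0 Hz
```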
Analog
4051 – Single 8-channel analog mux.
4066 – Quad SPST analog switch.
== See also ==
== References ==
== Further reading ==
Periodicals
New low-voltage COS/MOS IC's (CD4000A); RCA Engineer; Vol 17 No 1; June 1971 to July 1971; Pages 40–45.
15 articles about COS/MOS IC's; RCA Engineer; Vol 18 No 4; December 1972 to January 1973.
Books
CMOS Cookbook; 4th Ed; Don Lancaster, Howard Berlin; Elsevier; 512 pages; 2019; ISBN 978-0672213984. (archive)
CMOS Sourcebook; 1st Ed; Newton Braga; Prompt Press; 390 pages; 2001; ISBN 978-0790612348.
Understanding CMOS Integrated Circuits; 2nd Ed; Roger Melen and Harry Garland; Sams Publishing; 144 pages; 1979; ISBN 978-0672215988. (archive)
Second Book of CMOS IC Projects; 1st Ed; R.A. Penfold; Bernard Babani Publishing; 127 pages; 1979; ISBN 978-0900162787. (archive)
50 CMOS IC Projects; 1st Ed; R.A. Penfold; Bernards Publishing; 112 pages; 1977; ISBN 978-0900162640. (archive)
Historical Documents
Signetics HE4000B Family Specifications; IC04; 13 pages; 1995.
RCA COS/MOS IC Manual; CMS-272; 170 pages; 1979.
RCA COS/MOS IC Manual; CMS-270; TBD pages; 1971.
Historical Databooks
RCA CMOS Databook; SSD-250C; 798 pages; 1983.
RCA COS/MOS Databook; SSD-203C; 649 pages; 1975.
Motorola CMOS Databook; DL131; 560 pages; 1988.
National CMOS Databook; 930 pages; 1981.
ST HCC40xxx Rad-Hard Logic Family; 16 pages; 2020. (for high-reliability and space applications)
== External links ==
Understanding 4000-series digital logic ICs – Nuts and Volts magazine
Thorough list of 4000-series ICs – Electronics Club
4000-series logic and analog circuitry – Analog Devices
Potential graphene applications include lightweight, thin, and flexible electric/photonics circuits, solar cells, and various medical, chemical and industrial processes enhanced or enabled by the use of new graphene materials, and favoured by massive cost decreases in graphene production.
== Medicine ==
Researchers in 2011 discovered the ability of graphene to accelerate the osteogenic differentiation of human mesenchymal stem cells without the use of biochemical inducers.
In 2015 researchers used graphene to create biosensors with epitaxial graphene on silicon carbide. The sensors bind to 8-hydroxydeoxyguanosine (8-OHdG) and are capable of selective binding with antibodies. The presence of 8-OHdG in blood, urine and saliva is commonly associated with DNA damage. Elevated levels of 8-OHdG have been linked to increased risk of several cancers. By the next year, a commercial version of a graphene biosensor was being used by biology researchers as a protein binding sensor platform.
In 2016 researchers revealed that uncoated graphene can be used as neuro-interface electrode without altering or damaging properties such as signal strength or formation of scar tissue. Graphene electrodes in the body are significantly more stable than electrodes of tungsten or silicon because of properties such as flexibility, bio-compatibility and conductivity.
=== Tissue engineering ===
Graphene has been investigated for tissue engineering. It has been used as a reinforcing agent to improve the mechanical properties of biodegradable polymeric nanocomposites for engineering bone tissue applications. Dispersion of a low weight percentage of graphene (≈0.02 wt.%) increased the compressive and flexural mechanical properties of polymeric nanocomposites. The addition of graphene nanoparticles to the polymer matrix led to improvements in the crosslinking density of the nanocomposite and better load transfer from the polymer matrix to the underlying nanomaterial, thereby increasing the mechanical properties.
=== Contrast agents, bioimaging ===
Functionalized and surfactant dispersed graphene solutions have been designed as blood pool MRI contrast agents. Further, iodine- and manganese-incorporating graphene nanoparticles have served as multimodal MRI-computerized tomography (CT) contrast agents. Graphene micro- and nano-particles have served as contrast agents for photoacoustic and thermoacoustic tomography. Graphene has also been reported to be efficiently taken up by cancerous cells, thereby enabling the design of drug delivery agents for cancer therapy. Graphene nanoparticles of various morphologies such as graphene nanoribbons, graphene nanoplatelets and graphene nano-onions are non-toxic at low concentrations and do not alter stem cell differentiation, suggesting that they may be safe to use for biomedical applications.
=== Polymerase chain reaction ===
Graphene is reported to have enhanced PCR by increasing the yield of DNA product. Experiments revealed that graphene's thermal conductivity could be the main factor behind this result. Graphene yields DNA product equivalent to positive control with up to 65% reduction in PCR cycles.
=== Devices ===
Graphene's modifiable chemistry, large surface area per unit volume, atomic thickness and molecularly gateable structure make antibody-functionalized graphene sheets excellent candidates for mammalian and microbial detection and diagnosis devices. Graphene is so thin that water has near-perfect wetting transparency which is an important property particularly in developing bio-sensor applications. This means that a sensor coated in graphene has as much contact with an aqueous system as an uncoated sensor, while remaining protected mechanically from its environment.
Integration of graphene (thickness of 0.34 nm) layers as nanoelectrodes into a nanopore can potentially solve a bottleneck for nanopore-based single-molecule DNA sequencing.
On November 20, 2013, the Bill & Melinda Gates Foundation awarded $100,000 'to develop new elastic composite materials for condoms containing nanomaterials like graphene'.
In 2014, graphene-based, transparent (across infrared to ultraviolet frequencies), flexible, implantable medical sensor microarrays were announced that allow the viewing of brain tissue hidden by implants. Optical transparency was greater than 90%. Applications demonstrated include optogenetic activation of focal cortical areas, in vivo imaging of cortical vasculature via fluorescence microscopy and 3D optical coherence tomography.
=== Drug delivery ===
Researchers at Monash University discovered that a sheet of graphene oxide can be transformed into liquid crystal droplets spontaneously—like a polymer—simply by placing the material in a solution and manipulating the pH. The graphene droplets change their structure in the presence of an external magnetic field. This finding raises the possibility of carrying a drug in graphene droplets and releasing the drug upon reaching the targeted tissue by making the droplets change shape in a magnetic field. Another possible application is in disease detection, if graphene is found to change shape in the presence of certain disease markers such as toxins.
A graphene 'flying carpet' was demonstrated to deliver two anti-cancer drugs sequentially to lung tumor cells (A549 cells) in a mouse model. Doxorubicin (DOX) is embedded onto the graphene sheet, while the molecules of tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) are linked to the nanostructure via short peptide chains. Injected intravenously, the graphene strips with the drug payload preferentially concentrate at the cancer cells due to common blood-vessel leakage around the tumor. Receptors on the cancer cell membrane bind TRAIL, and cell-surface enzymes clip the peptide, releasing the drug onto the cell surface. Without the bulky TRAIL, the graphene strips with the embedded DOX are swallowed into the cells. The intracellular acidic environment promotes DOX's release from graphene. TRAIL on the cell surface triggers apoptosis while DOX attacks the nucleus. The two drugs work synergistically and were found to be more effective than either drug alone.
The development of nanotechnology and molecular biology has enabled nanomaterials with specific properties that can overcome the weaknesses of traditional disease diagnostic and therapeutic procedures. In recent years, more attention has been devoted to designing and developing new methods for the sustained release of diverse drugs. Each drug has a plasma level above which it is toxic and below which it is ineffective, and in conventional drug delivery the drug concentration in the blood rises quickly and then declines. The main aim of an ideal drug delivery system (DDS) is therefore to maintain the drug within a desired therapeutic range after a single dose, and/or to target the drug to a specific region while simultaneously lowering its systemic levels. Graphene-based materials such as graphene oxide (GO) have considerable potential for several biological applications, including the development of new drug release systems. GO has an abundance of functional groups, such as hydroxyl, epoxy and carboxyl, on its basal surface and edges that can be used to immobilize or load various biomolecules for biomedical applications. On the other hand, biopolymers have frequently been used as raw materials for designing drug delivery formulations owing to their excellent properties, such as non-toxicity, biocompatibility, biodegradability and environmental sensitivity. Protein therapeutics possess advantages over small-molecule approaches, including high target specificity and low off-target effects on normal biological processes. Human serum albumin (HSA) is one of the most abundant blood proteins. It serves as a transport protein for several endogenous and exogenous ligands as well as various drug molecules.
HSA nanoparticles have long been the center of attention in the pharmaceutical industry due to their ability to bind various drug molecules, high storage stability, suitability for in vivo application, non-toxicity, lack of antigenicity, biodegradability, reproducibility, scalability of the production process and better control over release properties. In addition, significant amounts of drug can be incorporated into the particle matrix because of the large number of drug binding sites on the albumin molecule. Therefore, the combination of HSA-NPs and GO-NSs could be useful for reducing the cytotoxicity of GO-NSs while enhancing drug loading and sustained drug release in cancer therapy.
=== Biomicrorobotics ===
Researchers demonstrated a nanoscale biomicrorobot (or cytobot) made by cladding a living endospore cell with graphene quantum dots. The device acted as a humidity sensor.
=== Testing ===
In 2014 a graphene based blood glucose testing product was announced.
=== Biosensors ===
Graphene based FRET biosensors can detect DNA and the unwinding of DNA using different probes.
=== Gene editing ===
Researchers at Binghamton University have developed a methodology to utilize graphene as a DNA polymerase buffer to facilitate direct manipulation of nucleotides.
== Electronics ==
Graphene has a high carrier mobility and low noise, allowing it to be used as the channel in a field-effect transistor. Unmodified graphene does not have an energy band gap, making it unsuitable for digital electronics. However, modifications (e.g. graphene nanoribbons) have created potential uses in various areas of electronics.
=== Transistors ===
Both chemically controlled and voltage controlled graphene transistors have been built.
Graphene-based transistors could be much thinner than modern silicon devices, allowing faster and smaller configurations.
Graphene exhibits a pronounced response to perpendicular external electric fields, potentially enabling field-effect transistors (FETs), but the absence of a band gap fundamentally limits its on-off conductance ratio to less than about 30 at room temperature. A 2006 paper proposed an all-graphene planar FET with side gates; the devices showed conductance changes of 2% at cryogenic temperatures. The first top-gated FET (on–off ratio of <2) was demonstrated in 2007. Graphene nanoribbons may prove generally capable of replacing silicon as a semiconductor.
A patent for graphene-based electronics was issued in 2006. In 2008, researchers at MIT Lincoln Lab produced hundreds of transistors on a single chip and in 2009, very high frequency transistors were produced at Hughes Research Laboratories.
A 2008 paper demonstrated a switching effect based on reversible chemical modification of the graphene layer that gives an on–off ratio of greater than six orders of magnitude. These reversible switches could potentially be employed in nonvolatile memories. IBM announced in December 2008 graphene transistors operating at GHz frequencies.
In 2009, researchers demonstrated four different types of logic gates, each composed of a single graphene transistor. In May 2009, an n-type transistor complemented the prior p-type graphene transistors. A functional graphene integrated circuit was demonstrated—a complementary inverter consisting of one p- and one n-type transistor. However, this inverter suffered from low voltage gain. Typically, the amplitude of the output signal is about 40 times less than that of the input signal. Moreover, none of these circuits operated at frequencies higher than 25 kHz.
In the same year, tight-binding numerical simulations demonstrated that the band-gap induced in graphene bilayer field effect transistors is not sufficiently large for high-performance transistors for digital applications, but can be sufficient for ultra-low voltage applications, when exploiting a tunnel-FET architecture.
In February 2010, researchers announced graphene transistors with an on-off rate of 100 gigahertz, far exceeding prior rates, and exceeding the speed of silicon transistors with an equal gate length. The 240 nm devices were made with conventional silicon-manufacturing equipment. According to a January 2010 report, graphene was epitaxially grown on SiC in a quantity and with quality suitable for mass production of integrated circuits. At high temperatures, the quantum Hall effect could be measured. IBM built 'processors' using 100 GHz transistors on 2-inch (51 mm) graphene sheets.
In June 2011, IBM researchers announced the first graphene-based wafer-scale integrated circuit, a broadband radio mixer. The circuit handled frequencies up to 10 GHz. Its performance was unaffected by temperatures up to 127 °C. In November researchers used 3D printing (additive manufacturing) to fabricate devices.
In 2013, researchers demonstrated graphene's high mobility in a detector that allows broad band frequency selectivity ranging from the THz to IR region (0.76–33 THz). A separate group created a terahertz-speed transistor with bistable characteristics, which means that the device can spontaneously switch between two electronic states. The device consists of two layers of graphene separated by an insulating layer of boron nitride a few atomic layers thick. Electrons move through this barrier by quantum tunneling. These new transistors exhibit negative differential conductance, whereby the same electric current flows at two different applied voltages. In June, an 8 transistor 1.28 GHz ring oscillator circuit was described.
The negative differential resistance experimentally observed in graphene field-effect transistors of conventional design allows for construction of viable non-Boolean computational architectures. The negative differential resistance—observed under certain biasing schemes—is an intrinsic property of graphene resulting from its symmetric band structure. The results present a conceptual change in graphene research and indicate an alternative route for graphene applications in information processing.
In 2013 researchers created transistors printed on flexible plastic that operate at 25 gigahertz, sufficient for communications circuits and that can be fabricated at scale. The researchers first fabricated non-graphene-containing structures—the electrodes and gates—on plastic sheets. Separately, they grew large graphene sheets on metal, then peeled them and transferred them to the plastic. Finally, they topped the sheet with a waterproof layer. The devices work after being soaked in water, and were flexible enough to be folded.
In 2015 researchers devised a digital switch by perforating a graphene sheet with boron-nitride nanotubes that exhibited a switching ratio of 10⁵ at a turn-on voltage of 0.5 V. Density functional theory suggested that the behavior came from the mismatch of the density of states.
==== Single atom ====
In 2008, a one atom thick, 10 atoms wide transistor was made of graphene.
In 2022, researchers built a 0.34 nanometer (on state) single atom graphene transistor, smaller than a related device that used carbon nanotubes instead of graphene. The graphene formed the gate. Silicon dioxide was used as the base. The graphene sheet was formed via chemical vapor deposition, laid on top of the SiO2. A sheet of aluminum oxide was laid atop the graphene. The Al2Ox and SiO2 sandwiching the graphene act as insulators. They then etched into the sandwiched materials, cutting away the graphene and Al2Ox to create a step that exposed the edge of the graphene. They then added layers of hafnium oxide and molybdenum disulfide (another 2D material) to the top, side, and bottom of the step. Electrodes were then added to the top and bottom as source and drain. They call this construction a "sidewall transistor". The on/off ratio reached 1.02 × 10⁵ and subthreshold swing values were 117 mV dec⁻¹.
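The two reported figures are related: the subthreshold swing (gate voltage per decade of current change) sets the minimum gate swing needed to traverse the full on/off range. A back-of-envelope sketch using the stated values, purely for illustration:

```python
import math

# Back-of-envelope sketch using figures reported for the "sidewall transistor":
# subthreshold swing SS and on/off current ratio.
SS_MV_PER_DECADE = 117.0  # reported subthreshold swing (mV per decade)
ON_OFF_RATIO = 1.02e5     # reported on/off current ratio

decades = math.log10(ON_OFF_RATIO)          # decades of current modulation
gate_swing_mv = SS_MV_PER_DECADE * decades  # minimum gate swing to cover them

print(f"{decades:.2f} decades -> ~{gate_swing_mv:.0f} mV of gate swing")
```

So roughly 0.6 V of gate swing suffices to span the reported on/off range, under the idealized assumption that the swing stays constant over the whole subthreshold region.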
==== Trilayer ====
An electric field can change trilayer graphene's crystal structure, transforming its behavior from metal-like into semiconductor-like. A sharp metal scanning tunneling microscopy tip was able to move the domain border between the upper and lower graphene configurations. One side of the material behaves as a metal, while the other side behaves as a semiconductor. Trilayer graphene can be stacked in either Bernal or rhombohedral configurations, which can exist in a single flake. The two domains are separated by a precise boundary at which the middle layer is strained to accommodate the transition from one stacking pattern to the other.
Silicon transistors function as either p-type or n-type, whereas graphene can operate as both, which lowers costs and increases versatility. The technique provides the basis for a field-effect transistor.
In trilayer graphene, the two stacking configurations exhibit different electronic properties. The region between them consists of a localized strain soliton where the carbon atoms of one graphene layer shift by the carbon–carbon bond distance. The free-energy difference between the two stacking configurations scales quadratically with electric field, favoring rhombohedral stacking as the electric field increases.
This ability to control the stacking order opens the way to new devices that combine structural and electrical properties.
=== Transparent conducting electrodes ===
Graphene's high electrical conductivity and high optical transparency make it a candidate for transparent conducting electrodes, required for such applications as touchscreens, liquid crystal displays, inorganic photovoltaic cells, organic photovoltaic cells, and organic light-emitting diodes. In particular, graphene's mechanical strength and flexibility are advantageous compared to indium tin oxide, which is brittle. Graphene films may be deposited from solution over large areas.
Large-area, continuous, transparent and highly conducting few-layered graphene films were produced by chemical vapor deposition and used as anodes for application in photovoltaic devices. A power conversion efficiency (PCE) up to 1.7% was demonstrated, which is 55.2% of the PCE of a control device based on indium tin oxide. However, the main disadvantage of this fabrication method is poor substrate bonding, which eventually leads to poor cyclic stability and high electrode resistivity.
Organic light-emitting diodes (OLEDs) with graphene anodes have been demonstrated. The device was formed by solution-processed graphene on a quartz substrate. The electronic and optical performance of graphene-based devices are similar to devices made with indium tin oxide. In 2017 OLED electrodes were produced by CVD on a copper substrate.
A carbon-based device called a light-emitting electrochemical cell (LEC) was demonstrated with chemically-derived graphene as the cathode and the conductive polymer Poly(3,4-ethylenedioxythiophene) (PEDOT) as the anode. Unlike its predecessors, this device contains only carbon-based electrodes, with no metal.
In 2014 a prototype graphene-based flexible display was demonstrated.
In 2016 researchers demonstrated a display that used interferometric modulation to control colors, dubbed a "graphene balloon device", made of silicon containing 10 μm circular cavities covered by two graphene sheets. The degree of curvature of the sheets above each cavity defines the color emitted. The device exploits the phenomenon known as Newton's rings, created by interference between light waves bouncing off the bottom of the cavity and off the (transparent) membrane. Increasing the distance between the silicon and the membrane increased the wavelength of the light. The approach is already used in colored e-reader displays and smartwatches such as the Qualcomm Toq, which use silicon-based materials instead of graphene; graphene reduces power requirements.
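The gap-to-color relation follows from thin-film interference: reflection maxima occur when the optical round trip across the cavity equals a whole number of wavelengths. A minimal sketch, ignoring the phase shifts at the interfaces (which shift the condition in a real device):

```python
# Sketch: reflection maxima of an air gap of depth d, ignoring interface phase
# shifts. Constructive interference occurs when the round trip 2*d equals an
# integer number of wavelengths: 2*d = m * wavelength.
def reflection_peaks_nm(gap_nm: float, wavelength_range=(380, 780)):
    """Wavelengths (nm) within the visible range that have a reflection maximum."""
    lo, hi = wavelength_range
    peaks = []
    m = 1
    while (wl := 2 * gap_nm / m) >= lo:
        if wl <= hi:
            peaks.append(round(wl, 1))
        m += 1
    return peaks

print(reflection_peaks_nm(600))  # peaks for a 600 nm gap
```

Enlarging the gap shifts each peak toward longer wavelengths, consistent with the behavior described above.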
=== Frequency multiplier ===
In 2009, researchers built experimental graphene frequency multipliers that take an incoming signal of a certain frequency and output a signal at a multiple of that frequency.
=== Optoelectronics ===
Graphene strongly interacts with photons, with the potential for direct band-gap creation. This is promising for optoelectronic and nanophotonic devices. Light interaction arises due to the Van Hove singularity. Graphene displays different time scales in response to photon interaction, ranging from femtoseconds (ultra-fast) to picoseconds. Potential uses include transparent films, touch screens and light emitters or as a plasmonic device that confines light and alters wavelengths.
=== Hall effect sensors ===
Due to its extremely high electron mobility, graphene may be used to produce highly sensitive Hall effect sensors. A potential application of such sensors is in DC current transformers for special applications. Record-sensitivity Hall sensors were reported in April 2015; these sensors are twice as sensitive as existing Si-based sensors.
=== Quantum dots ===
Graphene quantum dots (GQDs) have all dimensions less than 10 nm. Their size and edge crystallography govern their electrical, magnetic, optical, and chemical properties. GQDs can be produced via graphite nanotomy or via bottom-up, solution-based routes (Diels-Alder, cyclotrimerization and/or cyclodehydrogenation reactions). GQDs with controlled structure can be incorporated into applications in electronics, optoelectronics and electromagnetics. Quantum confinement can be created by changing the width of graphene nanoribbons (GNRs) at selected points along the ribbon. GQDs are also studied as catalysts for fuel cells.
=== Organic electronics ===
A semiconducting polymer (poly(3-hexylthiophene)) placed on top of single-layer graphene vertically conducts electric charge better than on a thin layer of silicon. A 50 nm thick polymer film conducted charge about 50 times better than a 10 nm thick film, potentially because the former consists of a mosaic of variably oriented crystallites that forms a continuous pathway of interconnected crystals. In a thin film, or on silicon, plate-like crystallites are oriented parallel to the graphene layer. Uses include solar cells.
=== Spintronics ===
Large-area graphene created by chemical vapor deposition (CVD) and layered on a SiO2 substrate, can preserve electron spin over an extended period and communicate it. Spintronics varies electron spin rather than current flow. The spin signal is preserved in graphene channels that are up to 16 micrometers long over a nanosecond. Pure spin transport and precession extended over 16 μm channel lengths with a spin lifetime of 1.2 ns and a spin diffusion length of ≈6 μm at room temperature.
Spintronics is used in disk drives for data storage and in magnetic random-access memory. Electronic spin is generally short-lived and fragile, but the spin-based information in current devices needs to travel only a few nanometers. However, in processors, the information must cross several tens of micrometers with aligned spins. Graphene is the only known candidate for such behavior.
=== Conductive ink ===
In 2012 Vorbeck Materials started shipping the Siren anti-theft packaging device, which uses their graphene-based Vor-Ink circuitry to replace the metal antenna and external wiring to an RFID chip. This was the world's first commercially available product based on graphene.
== Light processing ==
=== Optical modulator ===
When the Fermi level of graphene is tuned, its optical absorption can be changed. In 2011, researchers reported the first graphene-based optical modulator. Operating at 1.2 GHz without a temperature controller, this modulator has a broad bandwidth (from 1.3 to 1.6 μm) and small footprint (~25 μm²).
A Mach-Zehnder modulator based on a hybrid graphene-silicon waveguide has been demonstrated that can process signals nearly chirp-free. An extinction ratio of up to 34.7 dB and a minimum chirp parameter of −0.006 were obtained; the insertion loss is roughly 1.37 dB.
=== Ultraviolet lens ===
A hyperlens is a real-time super-resolution lens that can transform evanescent waves into propagating waves and thus break the diffraction limit. In 2016 a hyperlens based on dielectric layered graphene and h-boron nitride (h-BN) was shown to surpass metal designs. Based on its anisotropic properties, flat and cylindrical hyperlenses were numerically verified with layered graphene at 1200 THz and layered h-BN at 1400 THz, respectively. Also in 2016, a 1-nm thick graphene microlens was created that can image objects the size of a single bacterium. The lens was made by spraying a sheet of graphene oxide solution, then molding the lens using a laser beam. It can resolve objects as small as 200 nanometers, and see into the near infrared. It breaks the diffraction limit and achieves a focal length less than half the wavelength of light. Possible applications include thermal imaging for mobile phones, endoscopes, nanosatellites and photonic chips in supercomputers and superfast broadband distribution.
=== Infrared light detection ===
Graphene reacts to the infrared spectrum at room temperature, albeit with sensitivity 100 to 1000 times too low for practical applications. However, two graphene layers separated by an insulator allowed an electric field produced by holes left by photo-freed electrons in one layer to affect a current running through the other layer. The process produces little heat, making it suitable for use in night-vision optics. The sandwich is thin enough to be integrated in handheld devices, eyeglass-mounted computers and even contact lenses.
=== Photodetector ===
A graphene/n-type silicon heterojunction has been demonstrated to exhibit strong rectifying behavior and high photoresponsivity. By introducing a thin interfacial oxide layer, the dark current of the graphene/n-Si heterojunction has been reduced by two orders of magnitude at zero bias. At room temperature, the graphene/n-Si photodetector with interfacial oxide exhibits a specific detectivity up to 5.77 × 10¹³ cm·Hz^1/2·W⁻¹ at the peak wavelength of 890 nm in vacuum. In addition, the improved graphene/n-Si heterojunction photodetectors possess a high responsivity of 0.73 A W⁻¹ and a high photo-to-dark current ratio of ≈10⁷. These results demonstrate that the graphene/Si heterojunction with interfacial oxide is promising for the development of high-detectivity photodetectors. More recently, a graphene/Si Schottky photodetector with record-fast response speed (<25 ns) from wavelength 350 nm to 1100 nm was presented. The photodetectors exhibit excellent long-term stability, even when stored in air for more than 2 years. These results not only advance the development of high-performance photodetectors based on the graphene/Si Schottky junction, but also have important implications for the mass production of graphene-based photodetector arrays for cost-effective environmental monitoring, medical imaging, free-space communications, photoelectric smart tracking, and integration with CMOS circuits for emerging internet-of-things applications.
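For a shot-noise-limited photodiode, specific detectivity relates responsivity and dark current density via D* = R / √(2·q·J_d). A sketch using the reported responsivity and an assumed dark current density (the J_d value below is illustrative, back-solved for consistency, not a figure from the paper):

```python
import math

# Shot-noise-limited specific detectivity: D* = R / sqrt(2 * q * J_d).
# R is the reported responsivity; J_DARK is an assumed dark current density
# chosen for illustration only.
Q = 1.602e-19   # elementary charge (C)
R = 0.73        # responsivity (A/W), as reported
J_DARK = 5e-10  # assumed dark current density (A/cm^2)

d_star = R / math.sqrt(2 * Q * J_DARK)  # cm*Hz^1/2/W (Jones)
print(f"D* ~ {d_star:.2e} Jones")
```

With these inputs the formula lands in the 10¹³ Jones regime quoted above, showing how strongly suppressing dark current (via the interfacial oxide) drives up detectivity.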
== Energy ==
=== Generation ===
==== Ethanol distillation ====
Graphene oxide membranes allow water vapor to pass through, but are impermeable to other liquids and gases. This phenomenon has been used for further distilling of vodka to higher alcohol concentrations, in a room-temperature laboratory, without the application of heat or vacuum as used in traditional distillation methods.
==== Solar cells ====
Graphene has been used on different substrates such as Si, CdS and CdSe to produce Schottky junction solar cells. Through the properties of graphene, such as graphene's work function, solar cell efficiency can be optimized. An advantage of graphene electrodes is the ability to produce inexpensive Schottky junction solar cells.
===== Charge conductor =====
Graphene solar cells use graphene's unique combination of high electrical conductivity and optical transparency. This material absorbs only 2.6% of green light and 2.3% of red light. Graphene can be assembled into a film electrode with low roughness. These films must be made thicker than one atomic layer to obtain useful sheet resistances. This added resistance can be offset by incorporating conductive filler materials, such as a silica matrix. Reduced conductivity can be offset by attaching large aromatic molecules such as pyrene-1-sulfonic acid sodium salt (PyS) and the disodium salt of 3,4,9,10-perylenetetracarboxylic diimide bisbenzenesulfonic acid (PDI). These molecules, under high temperatures, facilitate better π-conjugation of the graphene basal plane.
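The per-layer absorption figures quoted above (≈2.3% of visible light, i.e. πα with α the fine-structure constant) make the transparency cost of thicker films easy to estimate: each added layer lowers sheet resistance but compounds absorption. A minimal sketch:

```python
# Sketch: optical transmittance of stacked graphene layers, assuming each
# layer absorbs pi*alpha ~ 2.3% of incident light (the ideal monolayer value)
# and neglecting reflection and interlayer effects.
PI_ALPHA = 0.023  # fractional absorption per graphene layer

def transmittance(n_layers: int) -> float:
    """Approximate transmittance of n stacked graphene layers."""
    return (1.0 - PI_ALPHA) ** n_layers

for n in (1, 2, 4):
    print(n, f"{transmittance(n):.1%}")
```

Even four layers stay above 90% transmittance, which is why few-layer films remain viable as transparent electrodes despite the resistance/transparency tradeoff.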
===== Light collector =====
Using graphene as a photoactive material requires its bandgap to be 1.4–1.9 eV. In 2010, single cell efficiencies of nanostructured graphene-based PVs of over 12% were achieved. According to P. Mukhopadhyay and R. K. Gupta organic photovoltaics could be "devices in which semiconducting graphene is used as the photoactive material and metallic graphene is used as the conductive electrodes".
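The 1.4–1.9 eV target bandgap corresponds to an absorption edge spanning red to near-infrared light, via the standard conversion λ (nm) ≈ 1240 / Eg (eV). A quick check:

```python
# Quick check: absorption-edge wavelength for a given bandgap, using
# lambda_nm ~ 1240 / E_gap_eV (the product h*c expressed in eV*nm).
def edge_wavelength_nm(band_gap_ev: float) -> float:
    return 1240.0 / band_gap_ev

print(f"{edge_wavelength_nm(1.4):.0f} nm")  # near infrared end of the range
print(f"{edge_wavelength_nm(1.9):.0f} nm")  # red end of the range
```

A gap in this window lets the absorber capture most of the solar spectrum while keeping the photovoltage reasonably high, which is the usual rationale for the 1.4–1.9 eV target.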
In 2008, chemical vapor deposition produced graphene sheets by depositing a graphene film made from methane gas on a nickel plate. A protective layer of thermoplastic is laid over the graphene layer and the nickel underneath is then dissolved in an acid bath. The final step is to attach the plastic-coated graphene to a flexible polymer sheet, which can then be incorporated into a PV cell. Graphene/polymer sheets range in size up to 150 square centimeters and can be used to create dense arrays.
Silicon generates only one current-driving electron for each photon it absorbs, while graphene can produce multiple electrons. Solar cells made with graphene could offer 60% conversion efficiency.
==== Electrode ====
In 2010, researchers first reported creating a graphene-silicon heterojunction solar cell, where graphene served as a transparent electrode and introduced a built-in electric field near the interface between the graphene and n-type silicon to help collect charge carriers. In 2012 researchers reported efficiency of 8.6% for a prototype consisting of a silicon wafer coated with trifluoromethanesulfonyl-amide (TFSA) doped graphene. Doping increased efficiency to 9.6% in 2013. In 2015 researchers reported efficiency of 15.6% by choosing the optimal oxide thickness on the silicon. This combination of carbon materials with traditional silicon semiconductors to fabricate solar cells has been a promising field of carbon science.
In 2013, another team reported 15.6% efficiency by combining titanium oxide and graphene as a charge collector and perovskite as a sunlight absorber. The device is manufacturable at temperatures under 150 °C (302 °F) using solution-based deposition. This lowers production costs and offers the potential of using flexible plastics.
In 2015, researchers developed a prototype cell that used semitransparent perovskite with graphene electrodes. The design allowed light to be absorbed from both sides. It offered efficiency of around 12 percent with estimated production costs of less than $0.06/watt. The graphene was coated with the conductive polymer PEDOT:PSS (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate). Multilayering graphene via CVD created transparent electrodes with reduced sheet resistance. Performance was further improved by increasing contact between the top electrodes and the hole transport layer.
==== Fuel cells ====
Appropriately perforated graphene (and hexagonal boron nitride, hBN) can allow protons to pass through it, offering the potential for using graphene monolayers as barriers that block hydrogen atoms but not protons/ionized hydrogen (hydrogen atoms stripped of their electrons). They could even be used to extract hydrogen gas from the atmosphere to power electric generators running on ambient air.
The membranes are more effective at elevated temperatures and when covered with catalytic nanoparticles such as platinum.
Graphene could solve a major problem for fuel cells: fuel crossover that reduces efficiency and durability.
In methanol fuel cells, graphene, used as a barrier layer in the membrane area, has reduced fuel crossover with negligible proton resistance, improving performance.
At room temperature, proton conduction through monolayer hBN outperforms graphene, with a resistivity to proton flow of about 10 Ω cm² and a low activation energy of about 0.3 electronvolts. At higher temperatures graphene outperforms hBN, with resistivity estimated to fall below 10⁻³ Ω cm² above 250 °C.
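The strong temperature dependence is what an Arrhenius-type activated process predicts: resistivity scales as exp(Ea/kT). A sketch using the ~0.3 eV barrier quoted for hBN, purely to illustrate the scaling (the absolute resistivities above are measured, not derived from this):

```python
import math

# Arrhenius scaling of proton-transport resistivity: rho ~ exp(Ea / (k*T)).
# Ea = 0.3 eV is the activation energy quoted for monolayer hBN; the ratio
# below illustrates the temperature sensitivity, not a measured value.
K_B = 8.617e-5  # Boltzmann constant (eV/K)

def resistivity_ratio(ea_ev: float, t1_k: float, t2_k: float) -> float:
    """rho(t2) / rho(t1) for an Arrhenius-activated process."""
    return math.exp(ea_ev / K_B * (1 / t2_k - 1 / t1_k))

ratio = resistivity_ratio(0.3, 298.0, 523.0)  # 25 C -> 250 C
print(f"resistivity drops ~{1 / ratio:.0f}x between 25 C and 250 C")
```

Even a modest 0.3 eV barrier produces a better-than-100-fold drop in resistivity over this temperature range, which is why the membranes perform so much better when heated.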
In another project, protons easily pass through slightly imperfect graphene membranes on fused silica in water. The membrane was exposed to cycles of high and low pH. Protons transferred reversibly from the aqueous phase through the graphene to the other side, where they undergo acid–base chemistry with silica hydroxyl groups. Computer simulations indicated energy barriers of 0.61–0.75 eV for hydroxyl-terminated atomic defects that participate in a Grotthuss-type relay, while pyrylium-like ether terminations did not. Recently, Paul and co-workers at IISER Bhopal demonstrated solid state proton conduction for oxygen functionalized few-layer graphene (8.7 × 10⁻³ S/cm) with a low activation barrier (0.25 eV).
==== Thermoelectrics ====
Adding 0.6% graphene to a mixture of lanthanum and partly reduced strontium titanium oxide produces a strong Seebeck effect at temperatures ranging from room temperature to 750 °C (compared to 500–750 °C without graphene). The material converts 5% of the heat into electricity (compared to 1% for strontium titanium oxide).
==== Condenser coating ====
In 2015 a graphene coating on steam condensers quadrupled condensation efficiency, increasing overall plant efficiency by 2–3 percent.
=== Storage ===
==== Supercapacitor ====
Due to graphene's high surface-area-to-mass ratio, one potential application is in the conductive plates of supercapacitors.
In February 2013 researchers announced a novel technique to produce graphene supercapacitors based on the DVD burner reduction approach.
In 2014 a supercapacitor was announced that was claimed to achieve energy density comparable to current lithium-ion batteries.
In 2015 the technique was adapted to produce stacked, 3-D supercapacitors. Laser-induced graphene was produced on both sides of a polymer sheet. The sections were then stacked, separated by solid electrolytes, making multiple microsupercapacitors. The stacked configuration substantially increased the energy density of the result. In testing, the researchers charged and discharged the devices for thousands of cycles with almost no loss of capacitance. The resulting devices were mechanically flexible, surviving 8,000 bending cycles. This makes them potentially suitable for rolling in a cylindrical configuration. Solid-state polymeric electrolyte-based devices exhibit areal capacitance of >9 mF/cm² at a current density of 0.02 mA/cm², over twice that of conventional aqueous electrolytes.
Also in 2015 another project announced a microsupercapacitor small enough to fit in wearable or implantable devices. Just one-fifth the thickness of a sheet of paper, it is capable of holding more than twice as much charge as a comparable thin-film lithium battery. The design employed laser-scribed graphene (LSG) with manganese dioxide. The devices can be fabricated without extreme temperatures or expensive "dry rooms". Their capacity is six times that of commercially available supercapacitors. The device reached a volumetric capacitance of over 1,100 F/cm³. This corresponds to a specific capacitance of the constituent MnO2 of 1,145 F/g, close to the theoretical maximum of 1,380 F/g. Energy density varies between 22 and 42 Wh/L depending on device configuration.
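Volumetric energy densities like those above can be sanity-checked from E = ½CV². The capacitance and voltage in the sketch below are assumed illustrative values (an effective device-level capacitance, not the per-MnO2 figure), chosen only to show that the arithmetic lands in the reported 22–42 Wh/L range:

```python
# Sanity check: volumetric energy density of a supercapacitor from E = 1/2 C V^2.
# The capacitance and voltage below are assumed illustrative values, not
# figures from the device described above.
def energy_density_wh_per_l(c_f_per_cm3: float, v_volts: float) -> float:
    joules_per_cm3 = 0.5 * c_f_per_cm3 * v_volts ** 2
    return joules_per_cm3 * 1000.0 / 3600.0  # cm^3 -> L, then J -> Wh

print(f"{energy_density_wh_per_l(300.0, 1.0):.1f} Wh/L")
```

An effective 300 F/cm³ at a 1 V window gives roughly 42 Wh/L, the top of the reported range; lower effective capacitance or voltage windows give the lower figures.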
In May 2015 a boric acid-infused, laser-induced graphene supercapacitor tripled its areal energy density and increased its volumetric energy density 5–10 fold. The new devices proved stable over 12,000 charge-discharge cycles, retaining 90 percent of their capacitance. In stress tests, they survived 8,000 bending cycles.
==== Batteries ====
Silicon-graphene anode lithium ion batteries were demonstrated in 2012.
Stable lithium-ion cycling was demonstrated in bi- and few-layer graphene films grown on nickel substrates, while single-layer graphene films have been demonstrated as a protective layer against corrosion in battery components such as the battery case. This creates possibilities for flexible electrodes for microscale Li-ion batteries, where the anode acts as both the active material and the current collector.
Researchers built a lithium-ion battery made of graphene and silicon, which was claimed to last over a week on one charge and took only 15 minutes to charge.
In 2015 plasma processing was used to bombard graphene samples with argon ions. This knocked out some carbon atoms and increased the capacitance of the materials three-fold. The resulting "armchair" and "zigzag" defects are named for the configurations of the carbon atoms that surround the holes.
In 2016, Huawei announced graphene-assisted lithium-ion batteries with greater heat tolerance and twice the life span of traditional lithium-ion batteries, the component with the shortest life span in mobile phones.
Graphene with controlled topological defects has been demonstrated to adsorb more ions, resulting in high-efficiency batteries.
=== Transmission ===
==== Conducting Wire ====
Due to graphene's high electrical and thermal conductivity, mechanical strength, and corrosion resistance, one potential application is in high-power energy transmission.
Copper wire has long been used for power transmission for its high conductivity, ductility, and low cost. However, traditional wire fails to meet the transmission requirements of many new technologies. Thermally dependent resistivity in mesoscopic copper wire limits efficiency and current-carrying capacity in small-scale electronics. Additionally, copper wire exhibits internal failure by electromigration at high current density, limiting miniaturization of wire. Copper's high weight and low-temperature oxidation also limit its applications in high-power transmission. Increasing demand for high-ampacity transmission in electronics and electric-vehicle applications necessitates improvements in conductor technology.
Graphene-copper composite conductors are a promising alternative to standard conductors in high-power applications.
In 2013, researchers demonstrated a one-hundred-fold increase in current carrying capacity with carbon nanotube-copper composite wires when compared to traditional copper wire. These composite wires exhibited a temperature coefficient of resistivity an order of magnitude smaller than copper wires, an important feature for high load applications.
===== Graphene-clad wire =====
Additionally, in 2021, researchers demonstrated a 4.5 times increase in the current density breakdown limit of copper wire with an axially continuous graphene shell. The copper wire was coated by a continuous graphene sheet through chemical vapor deposition. The coated wire exhibited reduced oxidation of the wire during joule heating, increased heat dissipation (224% higher), and increased conductivity (41% higher).
== Sensors ==
=== Biosensors ===
Graphene does not oxidize in air or in biological fluids, making it an attractive material for use as a biosensor. A graphene circuit can be configured as a field effect biosensor by applying biological capture molecules and blocking layers to the graphene, then controlling the voltage difference between the graphene and the liquid that includes the biological test sample. Of the various types of graphene sensors that can be made, biosensors were the first to be available for sale.
=== Pressure sensors ===
The electronic properties of graphene/h-BN heterostructures can be modulated by changing the interlayer distances via external pressure, potentially enabling atomically thin pressure sensors. In 2011 researchers proposed an in-plane pressure sensor consisting of graphene sandwiched between hexagonal boron nitride and a tunneling pressure sensor consisting of h-BN sandwiched by graphene. The current varies by 3 orders of magnitude as pressure increases from 0 to 5 nN/nm². This structure is insensitive to the number of wrapping h-BN layers, simplifying process control. Because h-BN and graphene are stable at high temperatures, the device could support ultra-thin pressure sensors for application under extreme conditions.
In 2016 researchers demonstrated a biocompatible pressure sensor made by mixing graphene flakes with cross-linked polysilicone (the material found in Silly Putty).
=== NEMS ===
Nanoelectromechanical systems (NEMS) can be designed and characterized by understanding the interaction and coupling between the mechanical, electrical, and van der Waals energy domains. The quantum mechanical limit governed by the Heisenberg uncertainty relation determines the ultimate precision of nanomechanical systems. Quantum squeezing can improve that precision by reducing quantum fluctuations in one of the two quadrature amplitudes. Traditional NEMS hardly achieve quantum squeezing due to their thickness limits. A scheme has been proposed to obtain squeezed quantum states through typical experimental graphene NEMS structures, taking advantage of graphene's atomic-scale thickness.
=== Molecular absorption ===
Theoretically graphene makes an excellent sensor due to its 2D structure. The fact that its entire volume is exposed to its surrounding environment makes it very efficient to detect adsorbed molecules. However, similar to carbon nanotubes, graphene has no dangling bonds on its surface. Gaseous molecules cannot be readily adsorbed onto graphene surfaces, so intrinsically graphene is insensitive. The sensitivity of graphene chemical gas sensors can be dramatically enhanced by functionalization, for example, coating the film with a thin layer of certain polymers. The thin polymer layer acts like a concentrator that absorbs gaseous molecules. The molecule absorption introduces a local change in electrical resistance of graphene sensors. While this effect occurs in other materials, graphene is superior due to its high electrical conductivity (even when few carriers are present) and low noise, which makes this change in resistance detectable.
=== Piezoelectric effect ===
Density functional theory simulations predict that depositing certain adatoms on graphene can render it piezoelectrically responsive to an electric field applied in the out-of-plane direction. This type of locally engineered piezoelectricity is similar in magnitude to that of bulk piezoelectric materials and makes graphene a candidate for control and sensing in nanoscale devices.
=== Body motion ===
Promoted by the demand for wearable devices, graphene has proved to be a promising material for flexible and highly sensitive strain sensors. An environment-friendly and cost-effective method to fabricate large-area ultrathin graphene films has been proposed for highly sensitive flexible strain sensors. The assembled graphene films form rapidly at the liquid/air interface via the Marangoni effect, and the area can be scaled up. These graphene-based strain sensors exhibit extremely high sensitivity, with a gauge factor of 1037 at 2% strain, the highest value reported so far for graphene platelets at such small deformation.
Rubber bands infused with graphene ("G-bands") can be used as inexpensive body sensors. The bands remain pliable and can be used as a sensor to measure breathing, heart rate, or movement. Lightweight sensor suits for vulnerable patients could make it possible to remotely monitor subtle movement. These sensors display 10×10⁴-fold increases in resistance and work at strains exceeding 800%. Gauge factors of up to 35 were observed. Such sensors can function at vibration frequencies of at least 160 Hz. At 60 Hz, strains of at least 6% at strain rates exceeding 6000%/s can be monitored.
=== Magnetic ===
In 2015 researchers announced a graphene-based magnetic sensor 100 times more sensitive than an equivalent device based on silicon (7,000 volts per amp-tesla). The sensor substrate was hexagonal boron nitride. The sensors were based on the Hall effect, in which a magnetic field induces a Lorentz force on moving electric charge carriers, leading to deflection and a measurable Hall voltage. In the worst case graphene roughly matched a best case silicon design. In the best case graphene required lower source current and power requirements.
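To put the quoted sensitivity in scale: a current-biased Hall sensor produces a voltage V_H = S_I · I · B, where S_I is the current-normalized sensitivity in V/(A·T). The sketch below uses the ~7,000 V/(A·T) figure quoted above for the graphene device; the bias current, field strength, and the assumption that the silicon device is exactly 100× less sensitive are illustrative, not from the source.

```python
# Hall-effect back-of-the-envelope: V_H = S_I * I * B,
# with S_I the current-normalized sensitivity in V/(A*T).
def hall_voltage(sensitivity_v_per_at, bias_current_a, field_t):
    """Hall voltage produced by a current-biased Hall sensor."""
    return sensitivity_v_per_at * bias_current_a * field_t

# ~7,000 V/(A*T) is the figure from the text; the 100 uA bias and
# 1 mT field below are illustrative assumptions.
v_graphene = hall_voltage(7_000, 100e-6, 1e-3)
v_silicon = hall_voltage(70, 100e-6, 1e-3)  # assumed 100x less sensitive
print(f"graphene: {v_graphene*1e6:.1f} uV, silicon: {v_silicon*1e6:.1f} uV")
```

Even at a modest 1 mT field, the graphene figure yields hundreds of microvolts, which is why lower source currents suffice.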
== Environmental ==
=== Contaminant removal ===
Graphene oxide is non-toxic and biodegradable. Its surface is covered with epoxy, hydroxyl, and carboxyl groups that interact with cations and anions. It is soluble in water and forms stable colloid suspensions in other liquids because it is amphiphilic (able to mix with water or oil). Dispersed in liquids it shows excellent sorption capacities. It can remove copper, cobalt, cadmium, arsenate, and organic solvents.
=== Water filtration ===
Research suggests that graphene filters could outperform other techniques of desalination by a significant margin.
In 2021, researchers found that a reusable graphene foam could efficiently filter uranium (and possibly other heavy metals such as lead, mercury and cadmium) from water at the rate of 4 grams of uranium/gram of graphene.
=== Permeation barrier ===
Blocking permeation is as important as allowing it. Gas permeation barriers are important for packaging in almost all applications, ranging from food, pharmaceutical, and medical products to inorganic and organic electronic devices. A barrier extends the life of the product and allows the total thickness of devices to be kept small. Being atomically thin, defect-free graphene is impermeable to all gases. In particular, ultra-thin moisture permeation barrier layers based on graphene are shown to be important for organic FETs and OLEDs. Graphene barrier applications in biological sciences are under study.
== Other ==
=== Art preservation ===
In 2021, researchers reported that a graphene veil, reversibly applied via chemical vapor deposition, was able to preserve the colors in art objects, reducing color fading by about 70%.
=== Aviation ===
In 2016, researchers developed a prototype de-icing system that incorporated unzipped carbon nanotube graphene nanoribbons in an epoxy/graphene composite. In laboratory tests, the leading edge of a helicopter rotor blade was coated with the composite, covered by a protective metal sleeve. Applying an electrical current heated the composite to over 200 °F (93 °C), melting a 1 cm (0.4 in)-thick ice layer at an ambient temperature of −4 °F (−20 °C).
=== Catalyst ===
In 2014, researchers at the University of Western Australia discovered that nano-sized fragments of graphene can speed up the rate of chemical reactions. In 2015, researchers announced an atomic-scale catalyst made of graphene doped with nitrogen and augmented with small amounts of cobalt, whose onset voltage was comparable to that of platinum catalysts. In 2016 iron-nitrogen complexes embedded in graphene were reported as another form of catalyst. The new material was claimed to approach the efficiency of platinum catalysts. The approach eliminated the need for less efficient iron nanoparticles.
=== Coolant additive ===
Graphene's high thermal conductivity suggests that it could be used as an additive in coolants. Preliminary research showed that 5% graphene by volume can enhance the thermal conductivity of a base fluid by 86%. Graphene's enhanced thermal conductivity has also found application in PCR.
=== Lubricant ===
Scientists discovered that graphene works better as a lubricant than traditionally used graphite. A one-atom-thick layer of graphene between a steel ball and a steel disc lasted for 6,500 cycles; conventional lubricants lasted 1,000 cycles.
=== Nanoantennas ===
A graphene-based plasmonic nano-antenna (GPN) can operate efficiently at millimeter radio wavelengths. The wavelength of surface plasmon polaritons for a given frequency is several hundred times smaller than the wavelength of freely propagating electromagnetic waves of the same frequency. These speed and size differences enable efficient graphene-based antennas to be far smaller than conventional alternatives. The latter operate at frequencies 100–1000 times larger than GPNs, producing 0.01–0.001 times as many photons.
An electromagnetic (EM) wave directed vertically onto a graphene surface excites the graphene into oscillations that interact with those in the dielectric on which the graphene is mounted, thereby forming surface plasmon polaritons (SPP). When the antenna becomes resonant (an integral number of SPP wavelengths fit into the physical dimensions of the graphene), the SPP/EM coupling increases greatly, efficiently transferring energy between the two.
A phased array antenna 100 μm in diameter could produce 300 GHz beams only a few degrees in diameter, instead of the 180 degree radiation from a conventional metal antenna of that size. Potential uses include smart dust, low-power terabit wireless networks and photonics.
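A rough scale check of these figures: at 300 GHz the free-space wavelength is c/f ≈ 1 mm, and the text states the SPP wavelength is several hundred times shorter. The factor of 300 below is an illustrative assumption, chosen only to show that many SPP wavelengths fit across a 100 μm antenna, which is what makes resonance possible at that size.

```python
# Scale check for a 100 um graphene phased-array antenna at 300 GHz.
C = 299_792_458          # speed of light, m/s
f = 300e9                # operating frequency from the text, Hz
lam_free = C / f         # free-space wavelength, ~1 mm
lam_spp = lam_free / 300 # assumed SPP confinement factor of 300
antenna = 100e-6         # antenna diameter from the text, m
print(f"free-space: {lam_free*1e3:.2f} mm, SPP: {lam_spp*1e6:.1f} um, "
      f"SPP wavelengths across antenna: {antenna/lam_spp:.0f}")
```

A 100 μm metal antenna would be far smaller than the 1 mm free-space wavelength, but spans tens of SPP wavelengths, consistent with the resonance condition described above.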
A nanoscale gold rod antenna captured and transformed EM energy into graphene plasmons, analogous to a radio antenna converting radio waves into electromagnetic waves in a metal cable. The plasmon wave fronts can be directly controlled by adjusting antenna geometry. The waves were focused (by curving the antenna) and refracted (by a prism-shaped graphene bilayer because the conductivity in the two-atom-thick prism is larger than in the surrounding one-atom-thick layer.)
The plasmonic metal-graphene nanoantenna was constructed by inserting a few nanometers of oxide between a dipole gold nanorod and the monolayer graphene. The oxide layer reduces quantum tunneling between the graphene and the metal antenna. By tuning the chemical potential of the graphene layer through a field-effect transistor architecture, in-phase and out-of-phase mode coupling between the graphene and metal plasmons is realized. The tunable properties of the plasmonic metal-graphene nanoantenna can be switched on and off by modifying the electrostatic gate voltage on the graphene.
=== Plasmonics and metamaterials ===
Graphene accommodates a plasmonic surface mode, observed recently via near-field infrared optical microscopy techniques and infrared spectroscopy. Potential applications are in the terahertz to mid-infrared frequencies, such as terahertz and mid-infrared light modulators, passive terahertz filters, mid-infrared photodetectors, and biosensors.
=== Radio wave absorption ===
Stacked graphene layers on a quartz substrate increased the absorption of millimeter (radio) waves by 90 per cent over 125–165 GHz bandwidth, extensible to microwave and low-terahertz frequencies, while remaining transparent to visible light. For example, graphene could be used as a coating for buildings or windows to block radio waves. Absorption is a result of mutually coupled Fabry–Perot resonators represented by each graphene-quartz substrate. A repeated transfer-and-etch process was used to control surface resistivity.
=== Redox ===
Graphene oxide can be reversibly reduced and oxidized via electrical stimulus. Controlled reduction and oxidation in two-terminal devices containing multilayer graphene oxide films are shown to result in switching between partly reduced graphene oxide and graphene, a process that modifies electronic and optical properties. Oxidation and reduction are related to resistive switching.
=== Reference material ===
Graphene's properties suggest it as a reference material for characterizing electroconductive and transparent materials. One layer of graphene absorbs 2.3% of red light.
This property was used to define the "conductivity of transparency", a figure of merit that combines sheet resistance and transparency into a single parameter, allowing materials to be compared without the use of two independent parameters.
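The 2.3%-per-layer absorption figure gives a simple stacking rule: neglecting reflection and interlayer interference, N layers transmit roughly (1 − 0.023)^N. A minimal sketch under that assumption:

```python
# Transmittance of stacked graphene, using the ~2.3% single-layer
# absorption quoted in the text. This simplified model ignores
# reflection and interlayer interference effects.
ABSORPTION_PER_LAYER = 0.023

def transmittance(n_layers):
    """Fraction of light transmitted through n stacked layers."""
    return (1 - ABSORPTION_PER_LAYER) ** n_layers

for n in (1, 2, 5, 10):
    print(f"{n:2d} layer(s): T = {transmittance(n):.1%}")
```

One layer transmits about 97.7% of the light, which is why a single sheet can serve as a well-defined transparent reference.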
=== Soundproofing ===
Researchers demonstrated a graphene-oxide-based aerogel that could reduce noise by up to 16 decibels. The aerogel weighed 2.1 kilograms per cubic metre (0.13 lb/cu ft). A conventional polyester urethane sound absorber might weigh 32 kilograms per cubic metre (2.0 lb/cu ft). One possible application is to reduce sound levels in airplane cabins.
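The 16 dB figure is easier to interpret in linear terms: decibels relate intensities by dB = 10·log10(I1/I0), so a 16 dB reduction corresponds to roughly a 40-fold drop in sound intensity. A quick check:

```python
import math

# Convert a decibel reduction into a sound-intensity ratio:
# dB = 10 * log10(I1 / I0)  =>  ratio = 10 ** (dB / 10)
def intensity_ratio(db_reduction):
    return 10 ** (db_reduction / 10)

print(f"16 dB -> intensity reduced ~{intensity_ratio(16):.0f}x")
```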
=== Sound transducers ===
Graphene's light weight provides relatively good frequency response, suggesting uses in electrostatic audio speakers and microphones. In 2015 an ultrasonic microphone and speaker were demonstrated that could operate at frequencies from 20 Hz–500 kHz. The speaker operated at a claimed 99% efficiency with a flat frequency response across the audible range. One application was as a radio replacement for long-distance communications, given sound's ability to penetrate steel and water, unlike radio waves.
=== Structural material ===
Graphene's strength, stiffness and lightness suggested it for use with carbon fiber. Graphene has been used as a reinforcing agent to improve the mechanical properties of biodegradable polymeric nanocomposites for engineering bone tissue.
It has also been used as a strengthening agent in concrete.
=== Thermal management ===
In 2011, researchers reported that a three-dimensional, vertically aligned, functionalized multilayer graphene architecture can be an approach for graphene-based thermal interfacial materials (TIMs) with superior thermal conductivity and ultra-low interfacial thermal resistance between graphene and metal.
Graphene-metal composites can be used in thermal interface materials.
Adding a layer of graphene to each side of a copper film increased the metal's heat-conducting properties up to 24%. This suggests the possibility of using them for semiconductor interconnects in computer chips. The improvement is the result of changes in copper's nano- and microstructure, not from graphene's independent action as an added heat conducting channel. High temperature chemical vapor deposition stimulates grain size growth in copper films. The larger grain sizes improve heat conduction. The heat conduction improvement was more pronounced in thinner copper films, which is useful as copper interconnects shrink.
Attaching graphene functionalized with silane molecules increases its thermal conductivity (κ) by 15–56% with respect to the number density of molecules. This is because of enhanced in-plane heat conduction resulting from the simultaneous increase of thermal resistance between the graphene and the substrate, which limited cross-plane phonon scattering. Heat spreading ability doubled.
However, mismatches at the boundary between horizontally adjacent crystals reduce heat transfer by a factor of 10.
=== Waterproof coating ===
Graphene could potentially usher in a new generation of waterproof devices whose chassis may not need to be sealed like today's devices.
== See also ==
Graphene applications as optical lenses
Hong Byung-hee
== References ==
The foundry model is a microelectronics engineering and manufacturing business model consisting of a semiconductor fabrication plant, or foundry, and an integrated circuit design operation, each belonging to separate companies or subsidiaries. It was first conceived by Morris Chang, the founder of the Taiwan Semiconductor Manufacturing Company Limited (TSMC).
Integrated device manufacturers (IDMs) design and manufacture integrated circuits. Many companies, known as fabless semiconductor companies, only design devices; merchant or pure play foundries only manufacture devices for other companies, without designing them. Examples of IDMs are Intel, Samsung, and Texas Instruments; examples of pure play foundries are GlobalFoundries, TSMC, and UMC; and examples of fabless companies are AMD, Nvidia, and Qualcomm.
Integrated circuit production facilities are expensive to build and maintain. Unless they can be kept at nearly full use, they become a drain on the finances of the company that owns them. The foundry model avoids these costs in two ways: fabless companies sidestep them by not owning such facilities, while merchant foundries find work from the worldwide pool of fabless companies and, through careful scheduling, pricing, and contracting, keep their plants in full use.
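A toy model (every number below is hypothetical, chosen only for illustration) shows why utilization dominates fab economics: the enormous fixed cost is spread over however many wafers the fab actually starts.

```python
# Illustrative fab economics: cost per wafer as a function of
# utilization. All dollar figures and capacities are hypothetical.
def cost_per_wafer(fixed_cost_per_year, capacity_wafers, utilization,
                   variable_cost_per_wafer):
    wafers_started = capacity_wafers * utilization
    return fixed_cost_per_year / wafers_started + variable_cost_per_wafer

for u in (1.0, 0.8, 0.5):
    c = cost_per_wafer(1_000_000_000, 500_000, u, 1_000)
    print(f"utilization {u:.0%}: ${c:,.0f}/wafer")
```

Halving utilization in this sketch raises the cost per wafer from $3,000 to $5,000, which is the pressure that drives merchant foundries to sell every available wafer start.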
== History ==
Microelectronic devices were originally designed and manufactured by a single company. These manufacturers were involved both in the research and development of manufacturing processes and in the research and development of microcircuit design.
The first pure play semiconductor foundry was the Taiwan Semiconductor Manufacturing Company, founded in 1987 by Morris Chang as a spin-off of Taiwan's government-backed Industrial Technology Research Institute, which split its design and fabrication divisions. Carver Mead had advocated this model in the U.S., but it was deemed too costly to pursue there. The separation of design and fabrication became known as the foundry model, with fabless companies outsourcing manufacturing to semiconductor foundries.
Fabless semiconductor companies do not have any semiconductor fabrication capability, instead contracting with a merchant foundry for fabrication. The fabless company concentrates on the research and development of an IC-product; the foundry concentrates on manufacturing and testing the physical product. If the foundry does not have any semiconductor design capability, it is a pure-play semiconductor foundry.
An absolute separation into fabless and foundry companies is not necessary. Many companies continue to exist that perform both operations and benefit from the close coupling of their skills. Some companies manufacture some of their own designs and contract out to have others manufactured or designed, in cases where they see value or seek special skills. The foundry model is a business model that seeks to optimize productivity.
=== MOSIS ===
The very first merchant foundries were part of the MOSIS service. The MOSIS service gave limited production access to designers with limited means, such as students, university researchers, and engineers at small startups. The designer submitted designs, and these submissions were manufactured with the commercial company's extra capacity. Manufacturers could insert some wafers for a MOSIS design into a collection of their own wafers when a processing step was compatible with both operations. The commercial company (serving as foundry) was already running the process, so they were effectively being paid by MOSIS for something they were already doing. A factory with excess capacity during slow periods could also run MOSIS designs to avoid having expensive capital equipment stand idle.
Under-use of an expensive manufacturing plant could lead to the financial ruin of the owner, so selling surplus wafer capacity was a way to maximize the fab's use. Hence, economic factors created a climate where fab operators wanted to sell surplus wafer-manufacturing capacity and designers wanted to purchase manufacturing capacity rather than try to build it.
Although MOSIS opened the doors to some fabless customers, earning additional revenue for the foundry and providing inexpensive service to the customer, running a business around MOSIS production was difficult. The merchant foundries sold wafer capacity on a surplus basis, as a secondary business activity. Services to the customers were secondary to the commercial business, with little guarantee of support. The choice of merchant dictated the design, development flow, and available techniques to the fabless customer. Merchant foundries might require proprietary and non-portable preparation steps. Foundries concerned with protecting what they considered trade secrets of their methodologies might only be willing to release data to designers after an onerous nondisclosure procedure.
=== Dedicated foundry ===
In 1987, the world's first dedicated merchant foundry opened its doors: Taiwan Semiconductor Manufacturing Company (TSMC). The distinction of 'dedicated' is in reference to the typical merchant foundry of the era, whose primary business activity was building and selling its own IC products. The dedicated foundry offers several key advantages to its customers. First, it does not sell finished IC products into the supply channel; thus a dedicated foundry will never compete directly with its fabless customers (obviating a common concern of fabless companies). Second, the dedicated foundry can scale production capacity to a customer's needs, offering low-quantity shuttle services in addition to full-scale production lines. Finally, the dedicated foundry offers a "COT flow" (customer-owned tooling) based on industry-standard EDA systems, whereas many IDM merchants required their customers to use proprietary (non-portable) development tools. The COT advantage gives the customer complete control over the design process, from concept to final design.
== Foundry sales leaders by year ==
A pure-play semiconductor foundry is a company that does not offer a significant amount of IC products of its own design, but instead operates semiconductor fabrication plants focused on producing ICs for other companies.
An integrated device manufacturer (IDM) foundry is one in which companies such as Texas Instruments, IBM, and Samsung provide foundry services alongside their own products, as long as there is no conflict of interest between the relevant parties.
=== 2023 ===
=== 2017 ===
=== 2016–2014 ===
=== 2013 ===
=== 2011 ===
=== 2010 ===
=== 2009–2007 ===
As of 2009, the top 17 semiconductor foundries were:
(1) Now acquired by GlobalFoundries
=== 2008–2006 ===
As of 2008, the top 18 pure-play semiconductor foundries were:
(1) Merged with CR Logic in 2008, reclassified as an IDM foundry
=== 2007–2005 ===
As of 2007, the top 14 semiconductor foundries include:
For ranking in worldwide:
=== 2004 ===
As of 2004, the top 10 pure-play semiconductor foundries were:
== Financial and IP issues ==
Like all industries, the semiconductor industry faces upcoming challenges and obstacles.
The cost to stay on the leading edge has steadily increased with each generation of chips. The financial strain is being felt by both large merchant foundries and their fabless customers. The cost of a new foundry exceeds $1 billion. These costs must be passed on to customers. Many merchant foundries have entered into joint ventures with their competitors in an effort to split research and design expenditures and fab-maintenance expenses.
Chip design companies sometimes avoid other companies' patents simply by purchasing the products from a licensed foundry with broad cross-license agreements with the patent owner.
Stolen design data is also a concern; data is rarely directly copied, because blatant copies are easily identified by distinctive features in the chip, placed there either for this purpose or as a byproduct of the design process. However, the data, including any procedure, process system, method of operation, or concept, may be sold to a competitor, who may save months or years of tedious reverse engineering.
== See also ==
== References ==
== External links ==
Compound Semiconductor.net: "Foundry model could be key to InP industry future"
An application-specific instruction set processor (ASIP) is a component used in system on a chip design. The instruction set architecture of an ASIP is tailored to benefit a specific application. This specialization of the core provides a tradeoff between the flexibility of a general purpose central processing unit (CPU) and the performance of an application-specific integrated circuit (ASIC).
Some ASIPs have a configurable instruction set. Usually, these cores are divided into two parts: static logic which defines a minimum ISA (instruction-set architecture) and configurable logic which can be used to design new instructions. The configurable logic can be programmed either in the field in a similar fashion to a field-programmable gate array (FPGA) or during the chip synthesis. ASIPs have two ways of generating code: either through a retargetable code generator or through a retargetable compiler generator. The retargetable code generator uses the application, ISA, and Architecture Template to create the code generator for the object code. The retargetable compiler generator uses only the ISA and Architecture Template as the basis for creating the compiler. The application code will then be used by the compiler to create the object code.
ASIPs can be used as an alternative to hardware accelerators for baseband signal processing or video coding. Traditional hardware accelerators for these applications suffer from inflexibility: it is very difficult to reuse a hardware datapath driven by handwritten finite-state machines (FSMs). The retargetable compilers of ASIPs help the designer to update the program and reuse the datapath. Typically, the ASIP design is more or less dependent on the tool flow, because designing a processor from scratch can be very complicated. One approach is to describe the processor using a high-level language and then to automatically generate the ASIP's software toolset.
== Examples ==
RISC-V Instruction Set Architecture (ISA) provides minimum base instruction sets that can be extended with additional application-specific instructions. The base instruction sets provide simplified control flow, memory and arithmetic operations on registers. Its modular design allows the base instructions to be extended for standard application-specific operations such as integer multiplication/division (M), single-precision floating point (F), or bit manipulation (B). For the non-standard instruction extensions, encoding space of the ISA is divided into three parts: standard, reserved, and custom. The custom encoding space is used for vendor-specific extensions.
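As an illustration of how a vendor might use the custom encoding space, the sketch below packs a vendor-defined R-type operation into the custom-0 major opcode. The field layout and the custom-0 value (0b0001011) follow the RISC-V base specification; the funct3/funct7 values and the "dot-product step" name are arbitrary illustrative choices, not standard assignments.

```python
# Encode a hypothetical vendor-specific R-type instruction in the
# RISC-V custom-0 opcode space. R-type layout (base ISA):
#   funct7[31:25] | rs2[24:20] | rs1[19:15] | funct3[14:12] | rd[11:7] | opcode[6:0]
CUSTOM_0 = 0b0001011  # major opcode reserved for custom extensions

def encode_r_type(funct7, rs2, rs1, funct3, rd, opcode=CUSTOM_0):
    assert all(0 <= r < 32 for r in (rs1, rs2, rd))
    return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | \
           (funct3 << 12) | (rd << 7) | opcode

# Hypothetical "dot-product step": rd = op(rs1, rs2)
insn = encode_r_type(funct7=0b0000001, rs2=11, rs1=10, funct3=0b000, rd=12)
print(f"0x{insn:08x}")  # -> 0x02b5060b
```

Because custom-0 and custom-1 are guaranteed never to be claimed by future standard extensions, instructions encoded this way cannot collide with later ratified additions to the ISA.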
== See also ==
Application-specific integrated circuit
System on Chip
Digital signal processor
== References ==
== Literature ==
Dake Liu (2008). Embedded DSP Processor Design: Application Specific Instruction Set Processors. MA: Elsevier Morgan Kaufmann. ISBN 978-0-12-374123-3.
Oliver Schliebusch; Heinrich Meyr; Rainer Leupers (2007). Optimized ASIP Synthesis from Architecture Description Language Models. Dordrecht: Springer. ISBN 978-1-4020-5685-7.
Leupers, Rainer; Ienne, Paolo, eds. (2006). Customizable Embedded Processors. San Mateo, CA: Morgan Kaufmann. ISBN 978-0-12-369526-0.
Gries, Matthias; Keutzer, Kurt, eds. (2005). Building ASIPs: The Mescal Methodology. New York: Springer. ISBN 978-0-387-26057-0.
== External links ==
TTA-Based Codesign Environment (TCE), an open source (MIT licensed) toolset for design of application specific TTA processors.
The Czochralski method, also Czochralski technique or Czochralski process, is a method of crystal growth used to obtain single crystals (monocrystals) of semiconductors (e.g. silicon, germanium and gallium arsenide), metals (e.g. palladium, platinum, silver, gold), salts and synthetic gemstones. The method is named after Polish scientist Jan Czochralski, who invented the method in 1915 while investigating the crystallization rates of metals. He made this discovery by accident: instead of dipping his pen into his inkwell, he dipped it in molten tin, and drew a tin filament, which later proved to be a single crystal. The process remains economically important, as roughly 90% of all modern-day semiconductor devices use material derived from this method.
The most important application may be the growth of large cylindrical ingots, or boules, of single-crystal silicon used in the electronics industry to make semiconductor devices like integrated circuits. Other semiconductors, such as gallium arsenide, can also be grown by this method, although lower defect densities in that case can be obtained using variants of the Bridgman–Stockbarger method. Still other semiconductors, such as silicon carbide, are grown by different methods such as physical vapor transport.
The method is not limited to production of metal or metalloid crystals. For example, it is used to manufacture very high-purity crystals of salts, including material with controlled isotopic composition, for use in particle physics experiments, with tight controls (parts-per-billion measurements) on confounding metal ions and water absorbed during manufacture.
== History ==
=== Early Developments (1915–1930s) ===
Jan Czochralski invented his method in 1916 at AEG in Germany while investigating the crystallization velocities of metals. His technique—originally reported in 1918—formed the basis for growing single crystals by pulling material from the melt. Until 1923, modifications to the method were confined mainly to Berlin‐based groups.
Shortly thereafter, in 1925, E.P.T. Tyndall's group at the University of Iowa grew zinc crystals using the Czochralski method for nearly a decade; these early crystals reached maximum diameters of about 3.5 mm and lengths of up to 35 cm.
The development of the fundamental process would be completed in 1937 by Henry Walther at Bell Telephone Laboratories. Walther introduced crystal rotation—a technique that compensates for thermal asymmetries—and implemented dynamic cooling control via an adjustable gas stream. His innovations enabled precise control over crystal shape and diameter and allowed the first growth of true bulk crystals, including high-melting-point materials such as sodium chloride. Walther’s work laid the foundation for the modern Czochralski process.
=== Post–World War II Revival (1940s–1950s) ===
The strategic importance of semiconductors following World War II led Gordon Teal, then employed at Bell Labs, to revive the Czochralski method for single crystal growth. In the early 1950s, high-quality germanium crystals were grown to meet the emerging demands of transistor technology, and soon after, silicon crystals were produced. This renewed interest marked the beginning of a rapid expansion in the use of the technique in the United States.
=== Global Spread and Process Refinements (Late 1950s–Present) ===
The adoption of the Czochralski method expanded internationally in the late 1950s. In Europe, Germany employed the technique for semiconductor crystals as early as 1952, followed by France in 1953, the United Kingdom and Russia in 1956, the Czech Republic in 1957, and finally Switzerland and the Netherlands in 1959. In Japan, the technique began to be used in 1959, with its applications and technical improvements accelerating during the 1960s.
During this period several key process modifications were introduced that further refined the Czochralski method:
• The hot-wall technique (circa 1956) reduced evaporation losses from the melt.
• The continuous melt feed method (circa 1956) stabilized the melt composition.
• The Liquid Encapsulated Czochralski (LEC) technique (introduced in 1962) enabled the growth of compound semiconductor crystals by suppressing the evaporation of volatile components.
• Automatic diameter control using crystal or crucible weighing (introduced in 1972–73) allowed for more precise regulation of crystal dimensions.
These innovations extended the versatility of the Czochralski process, paving the way for industrial-scale production of high-quality single crystals across a wide range of materials.
== Application ==
Monocrystalline silicon (mono-Si) grown by the Czochralski method is often referred to as monocrystalline Czochralski silicon (Cz-Si). It is the basic material in the production of integrated circuits used in computers, TVs, mobile phones and all types of electronic equipment and semiconductor devices. Monocrystalline silicon is also used in large quantities by the photovoltaic industry for the production of conventional mono-Si solar cells. The almost perfect crystal structure yields the highest light-to-electricity conversion efficiency for silicon.
Use of the Czochralski process is not limited to semiconductor materials; it is extensively utilized in the growth of high-quality optical crystals and synthetic gemstones. This method enables the production of large, high-purity crystals suitable for various optical applications. For instance, synthetic alexandrite—a variety of chrysoberyl—is commonly produced using this technique. Additionally, synthetic sapphire (corundum) is frequently grown through the Czochralski process. Furthermore, yttrium aluminium garnet (YAG), an artificial garnet, has been synthesized using this method. YAG crystals are utilized as diamond simulants and in various optical applications, benefiting from the process's ability to produce large, high-purity crystals.
== Production of Czochralski silicon ==
Semiconductor-grade silicon (only a few parts per million of impurities) is melted in a crucible at 1,425 °C (2,597 °F; 1,698 K), usually made of high-purity quartz. The crucible receives a charge consisting of high-purity polysilicon. Dopant impurity atoms such as boron or phosphorus can be added to the molten silicon in precise amounts to dope the silicon, thus changing it into p-type or n-type silicon, with different electronic properties. A precisely oriented rod-mounted seed crystal is dipped into the molten silicon. The seed crystal's rod is slowly pulled upwards and rotated simultaneously. By precisely controlling the temperature gradients, rate of pulling and speed of rotation, it is possible to extract a large, single-crystal, cylindrical ingot from the melt. Occurrence of unwanted instabilities in the melt can be avoided by investigating and visualizing the temperature and velocity fields during the crystal growth process. This process is normally performed in an inert atmosphere, such as argon, in an inert chamber, such as quartz. The quartz crucible is normally discarded after the process is terminated which normally happens after a single ingot is produced in what is known as a batch process, but it is possible to perform this process continuously, as well as with an applied magnetic field.
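As a back-of-the-envelope illustration of the batch process described above, conservation of mass links the polysilicon charge to the dimensions of the pulled ingot. The numbers below are assumptions for illustration, not figures from this article, and the sketch ignores the residual melt left in the crucible:

```python
import math

# Density of solid silicon, g/cm^3
RHO_SI = 2.33

def ingot_length_cm(charge_kg, diameter_mm):
    """Length of a cylindrical ingot pulled from a given polysilicon
    charge, assuming the entire charge is converted into the crystal."""
    radius_cm = diameter_mm / 20.0          # mm diameter -> cm radius
    volume_cm3 = charge_kg * 1000.0 / RHO_SI
    return volume_cm3 / (math.pi * radius_cm ** 2)

# A hypothetical 250 kg charge pulled as a 300 mm diameter ingot:
print(round(ingot_length_cm(250, 300), 1), "cm")
```

The result (roughly 1.5 m for a 250 kg charge at 300 mm diameter) is consistent with the ingot sizes quoted in the next section: up to 2 metres long and several hundred kilograms.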
== Crystal sizes ==
Due to efficiencies of scale, the semiconductor industry often uses wafers with standardized dimensions, or common wafer specifications. Early on, boules were small, a few centimeters wide. With advanced technology, high-end device manufacturers use 200 mm and 300 mm diameter wafers. Width is controlled by precise control of temperature, speeds of rotation, and the speed at which the seed holder is withdrawn. The crystal ingots from which wafers are sliced can be up to 2 metres in length, weighing several hundred kilograms. Larger wafers allow improvements in manufacturing efficiency, as more chips can be fabricated on each wafer, with lower relative loss, so there has been a steady drive to increase silicon wafer sizes. The next step up, 450 mm, was scheduled for introduction in 2018. Silicon wafers are typically about 0.2–0.75 mm thick, and can be polished to great flatness for making integrated circuits or textured for making solar cells.
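The efficiency gain from larger wafers can be sketched with a common first-order die-count approximation: usable dies scale with wafer area minus an edge-loss term proportional to the circumference. The die size and the exact form of the edge-loss term below are illustrative assumptions:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """First-order estimate: wafer area / die area, minus an edge-loss
    term proportional to the wafer circumference (a common rule of thumb)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical 100 mm^2 die on 200 mm vs 300 mm wafers
for d in (200, 300):
    print(d, "mm:", dies_per_wafer(d, 100), "dies")
```

Because area grows with the square of the diameter while edge loss grows only linearly, moving from 200 mm to 300 mm wafers more than doubles the die count per wafer, which is the "lower relative loss" driving the push toward larger wafers.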
== Incorporating impurities ==
When silicon is grown by the Czochralski method, the melt is contained in a silica (quartz) crucible. During growth, the walls of the crucible dissolve into the melt and Czochralski silicon therefore contains oxygen at a typical concentration of 10¹⁸ cm⁻³. Oxygen impurities can have beneficial or detrimental effects. Carefully chosen annealing conditions can give rise to the formation of oxygen precipitates. These have the effect of trapping unwanted transition metal impurities in a process known as gettering, improving the purity of surrounding silicon. However, formation of oxygen precipitates at unintended locations can also destroy electrical structures. Additionally, oxygen impurities can improve the mechanical strength of silicon wafers by immobilising any dislocations which may be introduced during device processing. It was experimentally shown in the 1990s that the high oxygen concentration is also beneficial for the radiation hardness of silicon particle detectors used in harsh radiation environments (such as CERN's LHC/HL-LHC projects). Therefore, radiation detectors made of Czochralski- and magnetic Czochralski-silicon are considered to be promising candidates for many future high-energy physics experiments. It has also been shown that the presence of oxygen in silicon increases impurity trapping during post-implantation annealing processes.
However, oxygen impurities can react with boron in an illuminated environment, such as that experienced by solar cells. This results in the formation of an electrically active boron–oxygen complex that detracts from cell performance. Module output drops by approximately 3% during the first few hours of light exposure.
=== Mathematical form ===
Impurity concentration in the final solid is given by
{\displaystyle {\frac {C}{C_{0}}}=k\left(1-{\frac {V}{V_{0}}}\right)^{k-1}{\text{,}}}
where C0 is the initial impurity concentration in the melt, C the impurity concentration in the grown solid, V the solidified volume, V0 the initial melt volume, and k the segregation coefficient associated with impurities at the melting phase transition. This follows from the fact that
{\displaystyle dI=-kC_{L}\,dV}
impurities are removed from the melt when an infinitesimal volume dV freezes.
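The segregation relation above can be evaluated numerically. A minimal sketch, where the segregation coefficient (k ≈ 0.8 is a commonly quoted value for boron in silicon) and the melt concentration are illustrative assumptions rather than figures from this article:

```python
def scheil_concentration(C0, k, fraction_solidified):
    """Impurity concentration in the solid at solidified fraction
    f = V/V0, per the normal-freezing relation C/C0 = k*(1-f)**(k-1)."""
    return C0 * k * (1.0 - fraction_solidified) ** (k - 1.0)

# Boron in silicon: segregation coefficient k ~ 0.8 (illustrative)
C0 = 1.0e16  # assumed initial melt concentration, atoms/cm^3
for f in (0.0, 0.5, 0.9):
    print(f"f = {f}: C = {scheil_concentration(C0, 0.8, f):.2e} cm^-3")
```

For k < 1 the first solid to freeze is purer than the melt (C = kC0 at f = 0), and the rejected impurities accumulate in the shrinking melt, so the concentration in the solid rises as solidification proceeds; the tail end of the ingot therefore carries the highest impurity load.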
== See also ==
Float-zone silicon
== References ==
== External links ==
Czochralski doping process
Silicon Wafer Processing Animation on YouTube
The CHIPS and Science Act is a U.S. federal statute enacted by the 117th United States Congress and signed into law by President Joe Biden on August 9, 2022. The act authorizes roughly $280 billion in new funding to boost domestic research and manufacturing of semiconductors in the United States, for which it appropriates $52.7 billion.
The act includes $39 billion in subsidies for chip manufacturing on U.S. soil along with 25% investment tax credits for costs of manufacturing equipment, and $13 billion for semiconductor research and workforce training, with the dual aim of strengthening American supply chain resilience and countering China. It also invests $174 billion in the overall ecosystem of public sector research in science and technology, advancing human spaceflight, quantum computing, materials science, biotechnology, experimental physics, research security, social and ethical considerations, workforce development, and diversity, equity, and inclusion efforts at NASA, NSF, DOE, EDA, and NIST.
The act does not have an official short title as a whole but is divided into three divisions with their own short titles: Division A is the CHIPS Act of 2022 (where CHIPS stands for the former "Creating Helpful Incentives to Produce Semiconductors for America Act"); Division B is the Research and Development, Competition, and Innovation Act; and Division C is the Supreme Court Security Funding Act of 2022.
By March 2024, analysts estimated that the act incentivized between 25 and 50 separate potential projects, with total projected investments of $160–200 billion and 25,000–45,000 new jobs. However, these projects face delays in receiving grants due to bureaucratic hurdles and shortages of skilled workers, both during the construction phase and upon completion in the operational/manufacturing stage, where 40% of the permanent new workers will need two-year technician degrees and 60% will need four-year engineering degrees or higher. In addition, Congress has repeatedly struck funding deals that underfunded key basic-research provisions of the act by tens of billions of dollars.
== History ==
The CHIPS and Science Act combines two bipartisan bills: the Endless Frontier Act, designed to boost investment in domestic high-tech research, and the CHIPS for America Act, designed to bring semiconductor manufacturing back to the U.S. The act is aimed at competing with China.
The Endless Frontier Act was initially presented to senators Chuck Schumer (D-NY) and Todd Young (R-IN) by Under Secretary of State Keith Krach in October 2019, as part of the Global Economic Security Strategy to boost investment in high-tech research vital to U.S. national security. The plan was to grow $150 billion in government R&D funding into a $500 billion investment, with matching investments from the private sector and a coalition of technological allies dubbed the "Techno-Democracies-10" (TD-10). On May 27, 2020, senators Young and Schumer, along with Congressmen Ro Khanna (D-CA) and Mike Gallagher (R-WI), introduced the bipartisan, bicameral Endless Frontier Act to solidify the United States' leadership in scientific and technological innovation through increased investments in the discovery, creation, and commercialization of technology fields of the future.
The United States Innovation and Competition Act of 2021 (USICA) (S. 1260), formerly known as the Endless Frontier Act, was United States legislation sponsored by Senate majority leader Chuck Schumer and Senator Young authorizing $110 billion for basic and advanced technology research over a five-year period. Investment in basic and advanced research, commercialization, and education and training programs in artificial intelligence, semiconductors, quantum computing, advanced communications, biotechnology and advanced energy, amounts to $100 billion. Over $10 billion was authorized for appropriation to designate ten regional technology hubs and create a supply chain crisis-response program.
The CHIPS for America Act portion stemmed from Under Secretary of State Krach and his team brokering the $12 billion on-shoring of TSMC (Taiwan Semiconductor Manufacturing Company) to secure the supply chain of sophisticated semiconductors, on May 15, 2020. Krach's stated strategy was to use the TSMC announcement as a stimulus for fortifying a trusted supply chain by attracting TSMC's broad ecosystem of suppliers; persuading other chip companies to produce in U.S., especially Intel and Samsung; inspiring universities to develop engineering curricula focused on semiconductor manufacturing and designing a bipartisan bill (CHIPS for America) to provide the necessary funding. This led to Krach and his team's close collaboration in creating the CHIPS for America component with senators John Cornyn (R-TX) and Mark Warner (D-VA). In June 2020, Senator Warner joined U.S. senator John Cornyn in introducing the $52 billion CHIPS for America Act. Elements of the Bioeconomy Research and Development Act of 2021 were also included.
Both bills were eventually merged into the U.S. Innovation and Competition Act (USICA). On June 8, 2021, the USICA passed 68–32 in the Senate with bipartisan support. The House version of the Bill, America COMPETES Act of 2022 (H.R. 4521), passed on February 4, 2022. The Senate passed an amended bill by substituting the text of H.R. 4521 with the text of the USICA on March 28, 2022. A Senate and House conference was required to reconcile the differences, which resulted in the bipartisan CHIPS and Science Act, or "CHIPS Plus". The bill passed the U.S. Senate by a vote of 64–33 on July 27, 2022. On July 28, the $280 billion bill passed the U.S. House by a vote of 243–187–1. On August 1, 2022, the magazine EE Times (Electronic Engineering) dubbed Under Secretary of State Keith Krach (as of February 2023, now the current Chairman of the Krach Institute for Tech Diplomacy at Purdue University) the architect of the CHIPS and Science Act. The bill was signed into law by President Joe Biden on August 9, 2022.
== Background and provisions ==
The law constitutes an industrial policy initiative which takes place against the background of a perceived AI Cold War between the US and China, as artificial intelligence technology relies on semiconductors. The law was considered amidst a global semiconductor shortage and intended to provide subsidies and tax credits to chip makers with operations in the United States. The U.S. Department of Commerce was granted the power to allocate funds based on companies' willingness to sustain research, build facilities, and train new workers.
For semiconductor and telecommunications purposes, the CHIPS Act designates roughly $106 billion. The CHIPS Act includes $39 billion in tax benefits, loan guarantees and grants, administered by the DOC to encourage American companies to build new chip manufacturing plants in the U.S. Additionally, $11 billion would go toward advanced semiconductor research and development: $8.5 billion of that total to the National Institute of Standards and Technology, $500 million to Manufacturing USA, and $2 billion to a new public research hub called the National Semiconductor Technology Center. $24 billion would go to a new 25 percent advanced semiconductor manufacturing tax credit to encourage firms to stay in the United States, and $200 million would go to the National Science Foundation to resolve short-term labor supply issues.
According to McKinsey, "The CHIPS Act allocates $2 billion to the Department of Defense to fund microelectronics research, fabrication, and workforce training. An additional $500 million goes to the Department of State to coordinate with foreign-government partners on semiconductor supply chain security. And $1.5 billion funds the USA Telecommunications Act of 2020, which aims to enhance competitiveness of software and hardware supply chains of open RAN 5G networks." (The open RAN research innovation fund is controlled by the National Telecommunications and Information Administration.) Companies are subjected to a ten-year ban prohibiting them from producing chips more advanced than 28 nanometers in China and Russia if they are awarded subsidies under the law.
The law authorizes $174 billion for uses other than semiconductor and telecom technologies. It authorizes, but does not appropriate, extended NASA funding for the International Space Station to 2030, partially funds the Artemis program returning humans to the Moon, and directs NASA to establish a Moon to Mars Program Office for a human mission to Mars beyond the Artemis program. The law also obligates NASA to perform research into further domesticating its supply chains and diversifying and developing its workforce, reducing the environmental effects of aviation, integrating unmanned aerial vehicle detection with air traffic control, investigating nuclear propulsion for spacecraft, continuing the search for extraterrestrial intelligence and xenology efforts, and boosting astronomical surveys for Near-Earth objects including the NEO Surveyor project.
The law could potentially invest $67 billion in accelerating advanced zero-emissions technologies (such as improved energy storage, hydrogen economy technologies, and carbon capture and storage) to mass markets, advancing building efficiency, and improving climate science research, according to the climate action think tank Rocky Mountain Institute. The law would invest $81 billion in the NSF, including new money for STEM education (it recommends $100 million in rural schools, a 50 percent increase in Noyce Teaching Scholarships, and $300 million in a "STEM Teacher Corps") and defense against foreign intellectual property infringement, and $20 billion in the new Directorate for Technology, Innovation, and Partnerships, which would be tasked with deploying the above technologies as well as promoting social and ethical considerations, and authorizes but does not appropriate $12 billion for ARPA-E. For the United States Department of Energy the law creates a new 501(c)(3) organization, the Foundation for Energy Security and Innovation, to leverage philanthropy for improving the workforce and bolstering energy research. It contains annual DOE budget increases for other purposes including supercomputer, nuclear fusion and particle accelerator research as well as minority-serving institution outreach and workforce development for teachers, and directs the DOC to establish $10 billion worth of research hubs in post-industrial rural and urban communities that have been subjected to historical underinvestment.
As a national security law, the law contains a variety of provisions related to research ethics, foreign talent recruitment, restrictions on Confucius Institutes, and establishing new research security initiatives in the DOE, NIST, and the NSF.
The law makes extensive recommendations to the NSF to add social, legal, and ethical considerations to the award process in all of its research activities, hinting at an embrace of public participatory technology assessment; the law does not invoke an NSF doctrine called the "broader impacts criterion" to do so. The law invests roughly $90 billion in strengthening and diversifying the STEM workforce through 33 programs, many of them incorporated deeply in the aforementioned semiconductor incentive, NSF labor supply, Tech Hubs, and DoD microelectronics R&D efforts; beyond those, the law authorizes $2.8 billion for standalone education projects, creates a Chief Diversity Officer position and codifies the Eddie Bernice Johnson INCLUDES Network to serve as the NSF's main diversity, equity, and inclusion program. The law expands NSF demographic data collection and workplace inclusion efforts, support for grantees in caregiver roles, and measures against sexual harassment. The law emphasizes skilled technical jobs that do not require a bachelor's degree, and directs grant applicants to closely integrate workforce initiatives with job training; notably, it does not invest in the United States Department of Labor to carry this out.
== Passage ==
Every senator in the Senate Democratic Caucus except for Bernie Sanders voted in favor of passing the CHIPS Act, and they were joined by seventeen Republican senators, including Senate Republican leader Mitch McConnell, Utah senator Mitt Romney, and South Carolina senator Lindsey Graham.
== Reception ==
=== Support ===
Many legislators and elected officials from across both the federal government and various state governments endorsed the passage. A large group of governors consisting of Pennsylvania's Tom Wolf, Alabama's Kay Ivey, California's Gavin Newsom, Kentucky's Andy Beshear, Michigan's Gretchen Whitmer, Wisconsin's Tony Evers, Illinois' J. B. Pritzker, Kansas' Laura Kelly, and North Carolina's Roy Cooper pushed for the passage of the bill back in November 2021.
Separately, Ohio governor Mike DeWine, whose state became the home of Intel's newest semiconductor fabrication plant in the Columbus suburb of New Albany, as well as Texas governor Greg Abbott and Texas senator John Cornyn, whose state was the home of a major investment from Samsung, each pushed for the bill to be passed and applauded its advancement through Congress. It has received widespread support from chip firms, though they were concerned about the provision banning them from further investments in China.
Intel CEO Pat Gelsinger said during an earnings call on September 30, 2022, that CHIPS Act subsidies were leading the company to explore building empty fab buildings (known as a "shell-first strategy") and aggressively acquire smaller competitors before installing any equipment, to avoid contributing to a predicted semiconductor glut.
=== Opposition ===
The bill was criticized by Republican House leader Kevin McCarthy and senator Bernie Sanders as a "blank check", which the latter equated to a bribe to semiconductor companies. China lobbied against the bill and criticized it as being "reminiscent of a 'Cold War mentality'".
=== Concerns of protectionism ===
In a piece for the Brookings Institution on December 20, 2022, Sarah Kreps and Paul Timmers expressed concerns regarding the protectionist provisions of the CHIPS and Science Act and the risk of a subsidy race with the EU, which proposed its own European Chips Act in 2022.
=== Concerns of poor workforce development ===
In a piece for Brookings on May 25, 2023, Annelies Goger and Banu Ozkazanc-Pan found the Act was vague in many of its workforce development provisions, and criticized the statute for failing to offer a comprehensive, 'wraparound' approach to workforce development. They focused on its lack of supportive provisions for closing racial and gender gaps in STEM, its lack of requirements for equitable access to child care and non-academic mentorship programs beyond well-resourced communities, and its piecemeal approach to the innovation cycle. Seven months later, Brookings staffers Martha Ross and Mark Muro also said the act's workforce provisions reflected a fragmented approach and their costs were difficult to determine.
=== Environmental concerns ===
Writing in the Substack climate and finance newsletter The Gigaton, Stanford MBA students Georgia Carroll and Zac Maslia criticized the act for lacking incentives for chipmakers to add renewable energy to their base loads, or to adopt reclaimed water and PFAS alternatives in their material inputs, and noted that the extensive environmental impact of the chipmaker and data center industry was at odds with the output of the act's new research programs.
=== Concerns of inaction on unions, stock buybacks ===
Robert Kuttner, economic nationalist commentator and editor of The American Prospect, expressed concerns that the bill did not provide enough resources to allow local residents near fabs to organize or form a trade union (thereby making unions rely too heavily on community benefits agreements compared to federal policy), that the Commerce Department would be too friendly to states with right-to-work laws (where the first new fabs would be built), that the bill did not restrictively define a "domestic company" regarding financing, and that fab owners would simply use CHIPS Act money to buy back stocks. In response to these concerns, on February 28, 2023, United States Secretary of Commerce Gina Raimondo published the first application for CHIPS Act grants, which encourages fab operators to use Project Labor Agreements for facilitating union negotiations during construction, outline their plans to curtail stock buybacks, share excess profits with the federal government, and open or point out nearby child care facilities. The application led to over 200 statements of interest from private companies within the first month and a half, looking to invest across the entire semiconductor supply chain in 35 states; by June 2023, the number had reached over 300. The Prospect later covered the lack of progress in PLA talks between key investor TSMC and local unions in Phoenix, and included both author Lee Harris's claim that the Raimondo guidance was insufficient in helping the talks, and liberal commentator Ezra Klein's criticism of the Raimondo guidance as excessive. Harris later reported that as a consequence TSMC and its non-union subcontractors had routinely engaged in alleged wage theft, underreported safety violations, and cut out various installation procedures that would have prevented costly repairs, delaying its projects.
=== Antitrust concerns ===
In February 2024, the antitrust think tank American Economic Liberties Project released a report evaluating the state of the semiconductor industry after the CHIPS and Science Act passed. It found that the Act was insufficient in dealing with what it saw as the effective monopolization and monopsony of the American semiconductor industry by TSMC and by 'fabless' semiconductor firms that practiced routine outsourcing, such as Nvidia and Apple Inc., the result of shareholder-driven decisions. It also found the Act was insufficient in shoring up American mid-level, consumer market-oriented manufacturing by increasing competition and resiliency there. It recommended that the Commerce Department increasingly involve the Federal Trade Commission and other antitrust agencies in its decision-making, incubate four mid-size competitors to TSMC, require 'fabless' firms to double their source numbers, and strategically levy tariffs and fees on select consumer electronics deemed lacking in American sourcing.
== Impact ==
=== Science impact ===
In August 2023, around the one-year anniversary of the act becoming law, the NSF released a fact sheet outlining what it had done in the first year. Notably, the Technology, Innovation and Partnerships Directorate had awarded more than 760 grants and signed 18 contracts in research and development, and incentivized $4 billion in private capital and 35 exits from federal seed funding for private companies; the NSF issued two letters to employees on research security, increased STEM scholarship amounts, and created a National Secure Data Service per the act's directives. The DOE also issued a press release to commemorate the anniversary, noting materials science, quantum computing and biotechnology had received major attention from the act, as well as efforts to improve energy use, materials sourcing transparency and recycling of computer chips.
On the second anniversary of the Act becoming law, the NSF put out an updated fact sheet. The TIP Directorate had now awarded a two-year total of 2,455 grants and signed 25 contracts in research and development, and incentivized $8.15 billion in private capital and more than 75 exits from federal seed funding; the NSF also designated 10 new Regional Innovation Engines in January 2024, issued the first 40 awards in the ExLENT program promoting experiential learning in semiconductor engineering at universities, launched the NSF SBIR/SBTT Fast-Track pilot program for certain startups and the APTO program promoting technology prediction, and signed a memorandum of understanding with the Commerce Department for further action in workforce development.
In September 2024, the National Academies of Sciences, Engineering, and Medicine produced a report on NASA's organizational efficiency mandated by the law, which found several critical weaknesses, namely, in long-term planning, workforce retention, headquarters staffing levels, budgetary support from Congress, aging infrastructure, and emphasis on research and development as part of instrument planning.
=== Project announcements ===
Many companies and ecosystem suppliers have announced investment plans since May 2020, when TSMC announced that it would build a fab in Arizona, which upon completion began producing Apple A16 chips in earnest in mid-September 2024, according to independent journalist Tim Culpan, achieving 4 percent higher production yields than the average in Taiwan by late October.
These include (before the act passed on August 9, 2022):
In July 2021, GlobalFoundries announced plans to build a new $1 billion fab in Upstate New York.
In November 2021, Samsung announced plans to build a $17 billion semiconductor factory to begin operations in the second half of 2024. It is the largest foreign direct investment ever in the state of Texas.
In January 2022, Intel announced an initial $20 billion investment that will generate 3,000 jobs, the largest investment in Ohio's history, with plans to grow to a $100 billion investment in eight fabrication plants.
In May 2022, Purdue University launched the nation's first comprehensive semiconductor degrees program in anticipation of the CHIPS Act spurring the creation of jobs for 50,000 trained semiconductor engineers in the United States.
In May 2022, Texas Instruments broke ground on new 300-mm semiconductor wafer fabrication plants in Sherman, Texas, and projected its investments will reach $30 billion and create as many as 3,000 jobs.
In July 2022, SkyWater announced plans to build an advanced $1.8 billion semiconductor manufacturing facility with the government of Indiana and Purdue University to pursue CHIPS funding.
After the act passed:
In September 2022, Wolfspeed announced it will build the world's largest silicon carbide semiconductor plant in Chatham County, North Carolina. By 2030, the company expects to occupy more than one million square feet of manufacturing space across 445 acres, at a cost of $1.3 billion. The first phase of development is supported by about $1 billion in incentives from state, county, and local governments, and the company intends to apply for CHIPS act money.
In October 2022, Micron Technology announced it will invest $20 billion in a new chip factory in Clay, New York, to take advantages of the subsidies in the Act and signaled it could expand its investments to $100 billion over 20 years. The state of New York granted the company $5.5 billion in tax credits as an incentive to move there, if it meets employment promises.
In December 2022, TSMC announced the opening of the company's second chip plant in Arizona, raising its investments in the state from $12 billion to $40 billion. At that time, company officials said that construction costs in the U.S. were four to five times those in Taiwan (due to alleged higher costs of labor, red tape, and training) and that they were having difficulty finding qualified personnel (so some U.S. hires were sent for training in Taiwan for 12–18 months), so it would cost at least 50% more to make a TSMC chip in the United States than in Taiwan. It was also reported that the project faced significant delays due to TSMC engaging in routine wage theft and not hiring unionized subcontractors to carry out pipe-fitting and other construction work properly, among other issues such as withholding necessary skills training. In January 2024, TSMC said it had delayed the opening from 2026 to 2028 in order to evaluate the Biden administration's shifting approach to tax credits; in April 2024, multiple TSMC employees, including trainees, also attested to deep workplace cultural differences between Taiwanese and American engineers as a key factor in these delays.
In February 2023, Texas Instruments announced an $11 billion investment in a new 300-mm wafer fab in Lehi, Utah.
In February 2023, Integra Technologies announced a $1.8 billion proposal for expanding their Outsourced Semiconductor Assembly and Test (OSAT) operation in Wichita, Kansas.
In February 2023, EMP Shield announced a $1.9 billion proposal for a new campus in Burlington, Kansas.
In April 2023, Bosch announced it was acquiring TSI Semiconductors and investing $1.5 billion in upgrades geared toward making silicon carbide chips at the TSI plant in Roseville, California.
In June 2023, the French company Mersen, a subsidiary of Le Carbone Lorraine, announced it would spend $81 million on an expansion project in Bay City and Greenville, Michigan due to Michigan's state implementation of the CHIPS Act.
The following projects were announced after the Act's first anniversary:
In November 2023, Amkor Technology announced they would apply for CHIPS Act funding to build a $2 billion chip packaging and testing facility in Peoria, Arizona, motivated by their work with Apple and TSMC.
In December 2023, BAE Systems announced they had received $35 million in national security-related grants from the Act to upgrade their Nashua, New Hampshire plant.
In January 2024, Microchip Technology announced they had received $162 million in similar grants to upgrade their Gresham, Oregon and Colorado Springs, Colorado plants.
In February 2024, GlobalFoundries announced they had received $1.5 billion in similar grants to build a new fab in Malta, New York and upgrade their Essex Junction, Vermont plant.
In March 2024, Intel announced they had received $8.5 billion from the Act to build four new highly advanced semiconductor fabs in Chandler, Arizona and New Albany, Ohio and upgrade plants in Hillsboro, Oregon and Rio Rancho, New Mexico.
In April 2024, TSMC announced they had received $6.6 billion to build a third fab in Arizona, with the intent to host the 2 nm process, and construction slated to begin in 2028. The grant was finalized on November 15.
In April 2024, Samsung announced they had received $6.4 billion in grants from the Act to invest in additional capacity at its new Texas factory site, which had been revealed to be located in Taylor, and at their existing factory in nearby Austin.
In April 2024, Micron Technologies announced a federal CHIPS and Science Act grant of $6.1 billion toward building a new semiconductor chip manufacturing campus in Clay, New York, a northern suburb of Syracuse in Upstate New York, along with a new leading-edge fab in Boise, Idaho; it also announced it would progress its worldwide investments by $100 billion.
In May 2024, the Biden administration and Polar Semiconductor agreed to establish a new foundry creating 160 new jobs in Bloomington, Minnesota using $120 million in CHIPS Act funding.
In May 2024, the administration and SK Group subsidiary Absolics announced an agreement to build glass wafers in a new factory creating 1,200 new jobs in Covington, Georgia using $75 million in CHIPS Act money.
In June 2024, the administration and Rocket Lab announced an agreement to expand production of solar cells in Albuquerque, New Mexico using $23.9 million in CHIPS Act money.
In January 2025, the Department of Commerce announced a $325 million award under the Act to Hemlock Semiconductor to help build a new polysilicon crystal factory in Hemlock, Michigan.
On May 13, 2024, Bloomberg News found a total of $32.8 billion had been allocated from the CaSA's $39 billion fund, with federal loans and tax credits set to reach $75 billion. Boston Consulting Group and the Semiconductor Industry Association estimated that by 2033, the United States would attain 28 percent of the world's market for advanced logic chips, and its share of the world's fabs would grow to 14 percent of the total (compared to a baseline scenario of 8 percent if the Act had not passed).
=== Tech Hubs ===
On October 23, 2023, the Biden administration announced that it directed the Economic Development Administration to focus on 31 areas (across 32 states and Puerto Rico) that it designated "Tech Hubs", for the purposes of spreading development evenly around the country, and incubating advanced technology and research. The Tech Hubs' organizers competed for a total of about $500 million in implementation grants, the first such appropriation out of a budgeted $10 billion over the next five years. The Biden administration also gave out "Strategy Development Grants" to 29 consortia of businesses, labor unions and governments in areas that lost out, encouraging further organizational improvements before trying again to become a Tech Hub.
On July 2, 2024, the Biden administration announced that it would award $504 million in additional grants to 12 of the Tech Hubs to further their research. It also announced that the Tech Hub program had already attracted $4 billion in private sector investments.
=== Macroeconomic impact ===
Estimates of the results of the CHIPS Act vary. The trade group Semiconductor Industry Association, which analyzed announced investments from May 2020 to December 2022, claimed the CHIPS Act had led to more than 50 projects worth more than $200 billion that would create 44,000 jobs. By the count of policy researcher Jack Conness, the CHIPS Act led to 37 projects worth $272 billion and a predicted 36,300 jobs as of November 14, 2024; when considered together with Inflation Reduction Act investments, the total comes out to 218 projects worth $388 billion creating 135,800 jobs.
Arizona is in line for the largest individual investment (TSMC's $65 billion investment, predicted to create 6,000 jobs), the most total jobs created (above 11,000) and the most dollars overall ($97.5 billion). Counties that voted for Biden in 2020 received more dollars from the Act ($227.9 million) than counties that voted for Donald Trump ($44 million).
In December 2023, the Financial Times found the IRA and CaSA together catalyzed over $224 billion in investments and over 100,000 new jobs by the preceding July.
According to the New Democrat-linked think tank Center for American Progress, the CHIPS and Science Act, the Inflation Reduction Act, and the Infrastructure Investment and Jobs Act have together led to more than 35,000 public and private investments. The Biden administration itself claimed that as of January 10, 2025, the IIJA, CaSA, and IRA together catalyzed $1 trillion in private investment (including $449 billion in electronics and semiconductors, $184 billion in electric vehicles and batteries, $215 billion in clean power, $93 billion in clean energy tech manufacturing and infrastructure, and $51 billion in heavy industry) and over $756.2 billion in public infrastructure spending (including $99 billion in energy aside from tax credits in the IRA).
=== California ===
==== Manufacturing ====
In California, where the semiconductor industry was founded in Silicon Valley, experts say that it is very unlikely that any new manufacturing facilities will be built, due to tight regulations, high costs of land and electricity, and unreliable water supplies. These factors have contributed to the state's 33% decline in manufacturing jobs since 1990.
==== Research ====
In May 2023, Applied Materials announced it would build a new collaborative advanced research and development center (distinct from traditional fabs) named the "EPIC Center", short for "Equipment and Process Innovation and Commercialization Center", by 2026, next to its existing facility in Sunnyvale, California. The first known CHIPS Act-linked investment in Silicon Valley, the EPIC Center is worth $4 billion and is projected to create 2,000 jobs.
== Implementation ==
=== Underfunding of research agencies ===
In June 2023, after the passage of the debt-ceiling deal, Federation of American Scientists analysts Matt Hourihan and Melissa Roberts Chapman and Brookings Institution analyst Mark Muro noted that the Consolidated Appropriations Act, 2023 had underfunded three key agencies to the Science Act (the NSF, the DOE's Office of Science, and NIST) by $2.7 billion, or 12 percent compared to the Act's intent, and that the President's proposal for the 2024 United States federal budget would likely shortchange them by $5.1 billion, or 19 percent compared to the Act's intent. Upon reviewing the effects the shortfalls would bring on defense policy and the economy, they recommended that more science and technology spending be moved into the mandatory category, as had been done with some semiconductor spending.
In March 2024, Politico contributor Christine Mui cited Hourihan in detailing how the Science Act interacted with later spending deals. In the actual 2024 budget, the NSF was underfunded by 42 percent compared to the Act's authorization and by 11 percent compared to its budget request; the Department of Energy's Office of Science was underfunded by 13 percent compared to the Act's authorization, while the Economic Development Administration's regional hubs program was funded with $41 million ($541 million since 2022) against an annual authorization of $2 billion ($4 billion from 2022); NIST's budget, for which the 2023 Appropriations Act appropriated $1.564 billion and the Science Act authorized $1.562 billion, saw an 11 percent cut and NASA's budget fell 9 percent short of its request. As of April 2024, CHIPS research agencies have been underfunded by over $8 billion.
In April, Commerce Secretary Raimondo revealed the CHIPS Program Office would no longer fund commercial research and development investments via the Act's $39 billion fund, due to high demand totaling $70 billion, and said applicants must seek other sources of R&D funding.
=== National Semiconductor Technology Center ===
The Act creates a National Semiconductor Technology Center to perform advanced research and development on semiconductors. In order to implement it, the Department of Commerce created a nonprofit public–private partnership within NIST called Natcast in April 2023, putting out a call for volunteers to select who would serve as board members. In June, the selection committee was announced as Janet Foutty of Deloitte, John L. Hennessy of Alphabet, Jason Gaverick Matheny of RAND Corporation, Don Rosenberg of the University of California, San Diego, and Brenda Darden Wilkerson of AnitaB.org. In September, the selection committee concluded its work. By the White House's announcement date, the board of trustees was finalized as Robin Abrams of Analog Devices Inc., Craig Barrett of Intel, Reggie Brothers of the MIT Lincoln Lab, Nick Donofrio of IBM, Donna Dubinsky of Palm and Handspring, and Erica Fuchs of Carnegie Mellon University. They selected Deirdre Hanford of Synopsys to serve as Natcast's CEO. As of October 24, 2024, Natcast had been promised at least $5 billion by the Biden administration, and has established a Workforce Center of Excellence and "Community of Interest", beginning its first $100 million grant competition in the summer, with a focus on improving artificial intelligence and making cutting-edge research cheaper. It has prepared its strategic plan for fiscal years 2025–27, outlining goals that range from scaling up multi-process wafer access to computer-aided design of chips to organizing the Workforce Center of Excellence.
The current headquarters of Natcast are in a strip mall in Portola Valley, California. States that have received large semiconductor investments, such as New York, Ohio, Arizona and Texas, were vying as of May 2024 to have the headquarters relocated to them. In October, the first flagship NSTC site was announced: an extreme ultraviolet lithography research lab at the Albany Nanotech Complex in Albany, New York. The second flagship site was announced the next day as a chip design lab in Sunnyvale, California. In January 2025, the third flagship site, a lab for chip packaging, was announced as Arizona State University Research Park in Tempe.
Arrian Ebrahaimi and Jordan Schneider, writing for the Institute for Progress, recommended the NSTC be structured with more centralization, work quickly and ambitiously to address market failures and externalities in chip research, and follow the management model of the similar Belgian company IMEC.
=== Metrology, packaging and digital twins initiatives ===
The Biden administration also invested at least $200 million in a new Manufacturing USA Institute under the Act, focused on spreading the use of digital twins in semiconductor design, and $300 million in the NIST Advanced Packaging Manufacturing Program, focused on researching new substrate chemistries for semiconductor packaging. The Commerce Department also awarded $100 million to 29 research projects in advanced metrology by February, and released a new notice of opportunity for metrology research funding on April 16.
=== Rule on business deals with countries of concern ===
In September 2023, the Commerce Department finalized its rule on Act funding recipients' activities in China and other "countries of concern": recipients are prohibited from expanding their manufacturing presence there by more than 5 percent for advanced chips and 10 percent for mid-market chips through deals worth $100,000 or more, and from brokering licensing agreements for technology transfers. The rule also sets out how the Secretary would be notified of violations.
=== Stock buybacks and economic equality ===
In October 2022, Senators Elizabeth Warren and Tammy Baldwin and Representatives Sean Casten, Jamaal Bowman, Pramila Jayapal and Bill Foster sent a letter to Secretary Raimondo urging her to detail how the Commerce Department would enforce the law's provisions preventing companies from using CHIPS Act money directly on stock buybacks (they noted the law does not prevent recipients from using the money to free up their own funds for stock buybacks), as well as whether the department would claw back misused funds and resolve conflicts of interest. On February 10, 2023, they and Senators Bernie Sanders and Ed Markey repeated many of the same points to Michael Schmidt, head of the Department's CHIPS Program Office, and urged even stronger action, outlining what regulatory crackdowns the law authorizes the department to do.
In January 2024, Warren and Jayapal wrote to Secretary Raimondo, Schmidt, and CHIPS Program Office investment head Todd Fisher expressing their concerns over who was staffing the main funds allocator, which reporting from The Wall Street Journal and Bloomberg News the previous summer and fall had found to be a small collection of elite bankers, consultants and lobbyists from Wall Street firms with potential conflicts of interest.
At the time BAE Systems was announced to be receiving a CHIPS Act grant, Warren and Casten wrote to CEO Tom Arsenault that they wanted BAE Systems to conform with the spirit of the Act, noting that BAE had engaged in $9.4 billion in stock buybacks the previous year. Journalist Les Leopold later cited the letter and Senator Chris Van Hollen's statements to denounce Intel's engagement in similar practices netting them nearly $153 billion since 1990 and their recent mass layoffs, following the $8.5 billion grant receipt announcement.
=== Grant delays ===
As of January 2024, only two small grants had been awarded, neither for production of the most advanced chips.
One hurdle delaying the release of award monies is the National Environmental Policy Act, which requires that projects receive federal approvals before any funds can be disbursed. A federal government analysis cited by The Wall Street Journal found that, from 2013 to 2018, these approvals took an average of 4.5 years to receive.
In April 2024, Commerce Secretary Gina Raimondo told CNBC at Samsung's grant announcement on the Taylor fab site that she expected all CHIPS Act grant money to be awarded by the end of the year, with most of the remaining funding going to equipment suppliers, wafer makers, and chemical engineering firms. However, by mid-November 2024, only Polar Semiconductor and TSMC's grant deals had been finalized by the Biden administration. This changed by the end of the month, as the Biden administration finalized its fifth and sixth grant agreements, with GlobalFoundries for its New York and Vermont projects, and with Intel for its Arizona, New Mexico, Ohio, and Oregon projects.
=== Shortage of skilled workers ===
The US lacks the workforce required to complete its planned fab projects, with one study estimating a need for 300,000 additional skilled workers just to finish ongoing fab projects, not including new ones. Comparatively, the number of US students pursuing relevant degrees has been stagnant for 30 years, while international students face difficulties in staying to work. Plants planned by both TSMC and Intel have reportedly struggled to find qualified workers. Even after completion, in the operational/manufacturing stage, 40% of the permanent new workers will need two-year technician degrees and 60% will need four-year engineering degrees or higher.
In Arizona, local unions clashed with TSMC after it reported that fab construction in Arizona was running behind schedule due to "an insufficient amount of skilled workers" with the expertise needed to install specialized equipment. TSMC planned to send experienced Taiwanese technicians to train local workers, which local unions characterized as "a lack of respect for American workers". The Arizona Building and Construction Trades Council subsequently asked Congress to block visas for 500 Taiwanese workers. TSMC reported that due to issues with labor, its investment in the first Arizona fab is expected to be delayed into 2025, with the second fab delayed from 2026 to 2027. (A third fab intended for hosting the 2 nm process would be announced in April 2024, though construction would not start until 2028.) In contrast, in February 2024 TSMC completed construction of its first fab in Japan, located in the Kumamoto region, in 20 months, by running 24-hour shifts, helped by the Japanese government and locals being welcoming to the influx of skilled Taiwanese workers needed for the project.
Intel similarly experienced delays from labor issues, with its planned Ohio fab expected to be delayed into 2026 due to a lack of skilled workers, as well as delays in grant funding.
=== Labor relations results ===
On December 6, 2023, the Arizona Building Trades Council and TSMC announced a deal to ensure a union-run workforce development program, improvements to transparency, and increased communications with the company's Taiwanese management, would proceed at the Arizona site.
In August 2024, the Prospect reported on several effects of the CaSA on unionization; the Act does not require grant recipients to remain neutral toward union organizing. More specifically, it covered Secretary Raimondo's lack of enforcement of the Commerce Department's Good Jobs Principles. Workers at an Analog Devices, Inc. fab in Beaverton, Oregon protested unsafe working conditions the previous June, and are lobbying Oregon state legislators to add unionization-neutrality provisions to their state-level version of the Act. Microchip Technology Vice President Dan Malinaric was recorded in July pressuring workers not to form a union, a violation of the Wagner Act. The Communications Workers of America union was able to reach community benefits agreements with only a select few firms benefiting from the Act, including Akash Systems.
=== Secure enclave issue ===
In March 2024, Bloomberg News reported that Intel was poised to receive $3.5 billion from the CaSA in the year's federal budget (specifically the second 'minibus') as part of a "secure enclave" program which Intel claimed would help facilitate national security through carrying out United States Department of Defense contracts with high levels of secrecy. Citing interviews of Charles Wessner of the Center for Strategic and International Studies and key congressional aides, and a risk assessment report from the United States Department of the Air Force, Austin Ahlman of the antitrust think tank Open Markets Institute criticized the plan, not least because it would take up more than 10 percent of the $39 billion in grants the Act designates for domestic semiconductors, as well as increase concentration in the domestic semiconductor industry. GlobalFoundries executives also criticized the plan. The DoD later withdrew its $2.5 billion contribution to the secure enclave plan and gave it to the Commerce Department, which allowed Intel to finalize the funding agreement on September 16, 2024, amid concerns of its shaky financial performance and lagging customer outreach. The funding agreement resulted in a reduction of Intel's later grant from an announced $8.5 billion to $7.86 billion in November.
== International collaboration ==
=== State Department funds ===
The State Department has awarded $200 million in partnerships to academia and foreign companies as of July 2024 under the Act's International Technology Security and Innovation Fund. The State Department has partnered with the governments of Costa Rica, Panama, Vietnam, Indonesia, the Philippines and Mexico to distribute these funds, for technology incubation purposes.
=== India-US defense fab partnership ===
In order to manufacture chips for national security needs, the US military has partnered with Indian startups to establish a semiconductor fabrication plant in India. With assistance from the India Semiconductor Mission, and under a strategic technology partnership between the United States Space Force (USSF), Bharat Semi, and 3rdiTech, the plant would produce silicon carbide, infrared, and gallium nitride chips. It will prioritize the high-voltage power electronics, advanced communications, and advanced sensors that underpin modern warfare. The chips will also be used in data centers, communications infrastructure, green energy systems, and railroads, supporting the development of a reliable and robust supply chain in an area crucial to national security.
The two-way cooperation is part of the CHIPS and Science Act and the United States–India Initiative on Critical and Emerging Technology. To design and develop military-grade semiconductors for night vision devices, missile guidance, space sensors, drones, fighter aircraft, electric vehicles, military communications, radars, and jammers, the collaboration involves setting up design hubs, testing centers, centers of excellence, and two fabrication units. The project will receive a 50% capital expenditure subsidy from the India Semiconductor Mission.
To address the defense demands of the United States and its allies, the Shakti Semiconductor Fab will acquire complete expertise in the development of compound semiconductors. The factory will begin phase-one production in 2027, with an annual target of 50,000 units. Establishing the facility will require $500 million in investment. General Atomics is 3rdiTech's technology validation partner; the company has worked with the US DoD and the United Kingdom's Ministry of Defence.
Designed for national security, the fabrication plant is the first multi-material fabrication facility in the world. On September 21, 2024, at a bilateral meeting between President Joe Biden and Prime Minister Narendra Modi in Delaware, the blueprint for the Bharat Semi Fab was revealed. The strategic significance of this project is further enhanced by the fact that it represents the US Space Force's first-ever international technology partnership.
== Follow-up environmental bill ==
On July 11, 2023, after complaints from semiconductor lobbyists on CHIPS Act-related permitting issues, Senators Mark Kelly, Sherrod Brown, Todd Young, Ted Cruz, and Bill Hagerty introduced the Building Chips in America Act, which would designate the Commerce Department the lead agency for major fab projects, limit the scope of NEPA reviews for certain fab projects, and cut judicial review times for them. The Senate passed the bill once on July 28, 2023 and again in December 2023. The House passed the companion bill on September 23, 2024, with a vote of 275–125. Amid protests from Zoe Lofgren, the Sierra Club, Center for Biological Diversity, CHIPS Communities United, and two dozen other environmental groups, President Biden signed the bill into law on October 2.
== Subsequent development ==
During Donald Trump's 2025 speech to a joint session of Congress, the president asked House Speaker Mike Johnson to "get rid" of the Act.
== See also ==
America COMPETES Act of 2022 – original House version
European Chips Act
Infrastructure Investment and Jobs Act and Inflation Reduction Act of 2022 – other major acts in industrial policy signed by Biden
Technology policy
Technology education
Techno-nationalism
United States Innovation and Competition Act – original Senate version
Artificial Intelligence Cold War
National Artificial Intelligence Initiative Act of 2020
== References ==
== External links ==
Official CHIPS website on nist.gov (also known as chips.gov)
List of CHIPS webinars
Chips and Science Act bill:
H.R.4346 – Chips and Science Act bill information on congress.gov
CHIPS and Science Act as amended (PDF/details) in the GPO Statute Compilations collection
CHIPS and Science Act as enacted (PDF/details) in the US Statutes at Large | Wikipedia/CHIPS_and_Science_Act |
Industrial processes are procedures involving chemical, physical, electrical, or mechanical steps to aid in the manufacturing of an item or items, usually carried out on a very large scale. Industrial processes are the key components of heavy industry.
== Chemical processes by main basic material ==
Certain chemical processes yield basic materials important to society, e.g., cement, steel, aluminium, and fertilizer. However, these processes contribute to climate change: they emit carbon dioxide, a greenhouse gas, both directly through their chemical reactions and through the combustion of fossil fuels used to generate the high temperatures needed to reach the activation energies of those reactions.
=== Cement (the paste within concrete) ===
Calcination – Limestone, which is largely composed of fossilized calcium carbonate (CaCO3), breaks down at high temperatures into usable calcium oxide (CaO) and carbon dioxide gas (CO2), which is released as a by-product. This chemical reaction, called calcination, figures most prominently in creating cement (the paste within concrete). The reaction is also important in providing calcium oxide to act as a chemical flux (removing impurities) within a blast furnace.
CaCO3(s) → CaO(s) + CO2(g)
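The process CO2 intensity implied by this reaction follows directly from molar masses. A rough illustrative sketch (standard atomic weights; process emissions only, excluding fuel combustion):

```python
# Stoichiometry of calcination: CaCO3 -> CaO + CO2
# Molar masses in g/mol, from standard atomic weights.
M_CaO = 40.08 + 16.00          # calcium oxide
M_CO2 = 12.01 + 2 * 16.00      # carbon dioxide

# One mole of CO2 is released per mole of CaO produced,
# so the mass ratio is simply M_CO2 / M_CaO.
co2_per_tonne_cao = M_CO2 / M_CaO

print(f"{co2_per_tonne_cao:.2f} t CO2 per t CaO")  # roughly 0.78
```

In a real cement kiln, fuel combustion adds further CO2 on top of this stoichiometric minimum for the calcination step.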
=== Steel ===
Smelting – Inside a blast furnace, carbon monoxide (CO), released by combusting coke (a high-carbon derivative of coal), removes the undesired oxygen (O) from ores. CO2 is released as a by-product, carrying away the oxygen and leaving behind the desired pure metal. Most prominently, iron smelting is how steel (largely iron with small amounts of carbon) is created from mined iron ore and coal.
Fe2O3(s) + 3 CO(g) → 2 Fe(s) + 3 CO2(g)
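As with calcination, the balanced equation fixes a stoichiometric floor on process CO2 per unit of metal. A minimal illustrative sketch (standard atomic weights; the reduction reaction only, ignoring fuel and other furnace chemistry):

```python
# Smelting stoichiometry: Fe2O3 + 3 CO -> 2 Fe + 3 CO2
# Molar masses in g/mol, from standard atomic weights.
M_Fe = 55.85                   # iron
M_CO2 = 12.01 + 2 * 16.00      # carbon dioxide

# 3 mol of CO2 are released per 2 mol of Fe produced.
co2_per_tonne_fe = (3 * M_CO2) / (2 * M_Fe)

print(f"{co2_per_tonne_fe:.2f} t CO2 per t Fe")  # roughly 1.18
```

Actual blast-furnace emissions per tonne of steel are higher, since coke is also burned for heat and some carbon remains in the metal.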
=== Aluminium ===
Hall–Héroult process – Aluminium oxide (Al2O3) is dissolved in molten cryolite and reduced in a high-temperature electrolysis reaction that consumes carbon (C) anodes made from coke, yielding the desired pure aluminium (Al) and a mixture of CO and CO2.
Al2O3(s) + 3 C(s) → 2 Al(s) + 3 CO(g)
2 Al2O3(s) + 3 C(s) → 4 Al(s) + 3 CO2(g)
=== Fertilizer ===
Haber process – Atmospheric nitrogen (N2) is fixed as ammonia (NH3), which is used to make all synthetic fertilizer. The hydrogen (H2) is usually obtained by steam reforming a fossil carbon source, generally natural gas, into CO and H2; the water–gas shift reaction then converts the CO into CO2 while yielding additional H2. The H2 is used to break the strong triple bond in N2, yielding industrial ammonia.
CH4(g) + H2O(g) → CO(g) + 3 H2(g)
CO(g) + H2O(g) → H2(g) + CO2(g)
N2(g) + 3 H2(g) → 2 NH3(g)
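Combining the three equations above, reforming plus shift amounts to CH4 + 2 H2O → CO2 + 4 H2, so one mole of CO2 accompanies every four moles of H2, and synthesis consumes 1.5 moles of H2 per mole of NH3. A rough sketch of the implied minimum process CO2 per tonne of ammonia (stoichiometry only, excluding fuel burned for heat):

```python
# Net reforming + shift: CH4 + 2 H2O -> CO2 + 4 H2   (1 mol CO2 per 4 mol H2)
# Haber synthesis:       N2 + 3 H2  -> 2 NH3         (1.5 mol H2 per mol NH3)
# Molar masses in g/mol, from standard atomic weights.
M_NH3 = 14.01 + 3 * 1.008      # ammonia
M_CO2 = 12.01 + 2 * 16.00      # carbon dioxide

co2_per_mol_nh3 = 1.5 / 4      # moles of CO2 per mole of NH3
co2_per_tonne_nh3 = co2_per_mol_nh3 * M_CO2 / M_NH3

print(f"{co2_per_tonne_nh3:.2f} t CO2 per t NH3")  # roughly 0.97
```

Industrial plants emit more than this floor in practice, since additional natural gas is burned to drive the endothermic reforming step.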
=== Other chemical processes ===
Disinfection – chemical treatment to kill bacteria and viruses
Pyroprocessing – using heat to chemically combine materials, such as in cement
== Electrolysis ==
The availability of electricity and its effect on materials gave rise to several processes for plating or separating metals.
Electrolytic process – any process using electrolysis
Electrophoretic deposition – electrolytic deposition of colloidal particles in a liquid medium
Electropolishing – the reverse of electroplating
Electrotyping – using electroplating to produce printing plates
Gilding, electroplating, anodizing, electrowinning – depositing a material on an electrode
Isoelectric focusing a.k.a. electrofocusing – similar to electroplating, but separating molecules
Metallizing, plating, spin coating – the generic terms for giving non-metals a metallic coating
== Cutting ==
Electrical discharge machining (EDM)
Laser cutting
Machining – the mechanical cutting and shaping of metal which involves the loss of the material
Oxy-fuel welding and cutting
Plasma cutting
Sawing
Shearing
Water-jet cutting – cutting materials using a very high-pressure jet of water
== Metalworking ==
Case-hardening, differential hardening, shot peening – creating a wear-resistant surface
Casting – shaping of a liquid material by pouring it into moulds and letting it solidify
Die cutting – A "forme" or "die" is pressed onto a flat material in order to cut, score, punch and otherwise shape the material
Electric arc furnace – very-high-temperature processing
Forging – the shaping of metal by use of heat and hammer
Hydroforming – a tube of metal is expanded into a mould under pressure
Precipitation hardening – heat treatment used to strengthen malleable materials
Progressive stamping – the production of components from a strip or roll
Sandblasting – cleaning of a surface using sand or other particles
Smelting and direct reduction – extracting metals from ores
Soldering, brazing, welding – a process for joining metals
Stamping
Steelmaking – turning "pig iron" from smelting into steel
Tumble polishing – for polishing
Work hardening – adding strength to metals, alloys, etc.
=== Iron and steel ===
Basic oxygen steelmaking
Bessemer process
Blast furnace – produced cast iron
Catalan forge, open hearth furnace, bloomery – produced wrought iron
Cementation process
Crucible steel
Direct reduction – produced direct reduced iron
Smelting – the process of using furnaces to produce steel, copper, etc.
== Molding ==
Molding is the physical shaping of a material by solidifying its liquid form in a mould
Blow molding as in plastic containers or in the glass container industry – making hollow objects by blowing them into a mould
Casting, sand casting – the shaping of molten metal or plastics using a mould
Compression molding
Sintering, powder metallurgy – the making of objects from metal or ceramic powder
== Separation ==
Many materials exist in an impure form. Purification or separation provides a usable product.
Comminution – reduces the size of solid particles (spans crushing and grinding)
Frasch process – for extracting molten sulfur from the ground
Froth flotation, flotation process – separating minerals through flotation
Liquid–liquid extraction – dissolving one substance in another
== Distillation ==
Distillation is the purification of volatile substances by evaporation and condensation
Batch distillation
Continuous distillation
Fractional distillation, steam distillation, vacuum distillation
Fractionating column
Spinning cone
== Additive manufacturing ==
In additive manufacturing, material is progressively added to the piece until the desired shape and size are obtained.
Fused deposition modeling (FDM)
Photolithography
Selective laser sintering (SLS)
Stereolithography (SLA)
== Petroleum and organic compounds ==
The nature of an organic molecule means it can be transformed at the molecular level to create a range of products.
Alkylation – refining of crude oil
Burton process – cracking of hydrocarbons
Cracking (chemistry) – the generic term for breaking up the larger molecules
Cumene process – making phenol and acetone from benzene
Friedel–Crafts reaction, Kolbe–Schmitt reaction
Olefin metathesis, thermal depolymerization
Oxo process – produces aldehydes from alkenes
Polymerization
Raschig hydroxylamine process – produces hydroxylamine, a precursor of nylon
Transesterification – organic chemicals
== Organized by product ==
Aluminium – (Hall-Héroult process, Deville process, Bayer process, Wöhler process)
Ammonia, used in fertilizer – (Haber process)
Bromine – (Dow process)
Chlorine, used in chemicals – (chloralkali process, Weldon process, Hooker process)
Fat – (rendering)
Fertilizer – (nitrophosphate process)
Glass – (Pilkington process)
Gold – (bacterial oxidation, Parkes process)
Graphite – (Acheson process)
Heavy water, used to refine radioactive products – (Girdler sulfide process)
Hydrogen – (water–gas shift reaction, steam reforming)
Lead (and bismuth) – (Betts electrolytic process, Betterton-Kroll process)
Nickel – (Mond process)
Nitric acid – (Ostwald process)
Paper – (pulping, Kraft process, Fourdrinier machine)
Rubber – (vulcanization)
Salt – (Alberger process, Grainer evaporation process)
Semiconductor crystals – (Bridgman–Stockbarger method, Czochralski method)
Silver – (Patio process, Parkes process)
Silicon carbide – (Acheson process, Lely process)
Sodium carbonate, used for soap – (Leblanc process, Solvay process, Leblanc-Deacon process)
Sulfuric acid – (lead chamber process, contact process)
Titanium – (Hunter process, Kroll process)
Zirconium – (Hunter process, Kroll process, van Arkel–de Boer process)
A list by process:
Alberger process, Grainer evaporation process – produces salt from brine
Bacterial oxidation – used to produce gold
Bayer process – the extraction of aluminium from ore
Chloralkali process, Weldon process – for producing chlorine and sodium hydroxide
Dow process – produces bromine from brine
Formox process – oxidation of methanol to produce formaldehyde
Girdler sulfide process – for making heavy water
Hunter process, Kroll process – produces titanium and zirconium
Industrial rendering – the separation of fat from bone and protein
Lead chamber process, contact process – production of sulfuric acid
Mond process – nickel
Nitrophosphate process – a number of similar processes for producing fertilizer
Ostwald process – produces nitric acid
Packaging
Pidgeon process – produces magnesium, reducing the oxide using silicon
Steam reforming, water gas shift reaction – produce hydrogen and carbon monoxide from methane or hydrogen and carbon dioxide from water and carbon monoxide
Vacuum metalising – a finishing process
Van Arkel–de Boer process – for producing titanium, zirconium, hafnium, vanadium, thorium, or protactinium
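As a worked example of two entries in the list above, steam reforming of methane followed by the water-gas shift can be written as balanced equations (standard textbook stoichiometry):

```latex
% Steam reforming of methane: hydrogen and carbon monoxide from methane and steam
\mathrm{CH_4 + H_2O \longrightarrow CO + 3\,H_2}
% Water-gas shift: hydrogen and carbon dioxide from carbon monoxide and steam
\qquad \mathrm{CO + H_2O \longrightarrow CO_2 + H_2}
```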
== See also ==
Chemical engineering
Industrial Extraction
Mass production
Multilevel Flow Modeling
Process (engineering)
== References == | Wikipedia/Industrial_processes |
Heat dissipation in integrated circuits has attracted increasing interest in recent years due to the miniaturization of semiconductor devices. The temperature increase becomes relevant for wires with relatively small cross-sections, because it may affect the normal behavior of semiconductor devices.
== Joule heating ==
Joule heating is the predominant mechanism of heat generation in integrated circuits and is an undesired effect.
== Propagation ==
The governing equation of the problem is the heat diffusion equation. It relates the flux of heat in space, its variation in time, and the generation of power.
{\displaystyle \nabla \left(\kappa \nabla T\right)+g=\rho C{\frac {\partial T}{\partial t}}}
where {\displaystyle \kappa } is the thermal conductivity, {\displaystyle \rho } is the density of the medium, {\displaystyle C} is the specific heat, {\displaystyle k={\frac {\kappa }{\rho C}}} is the thermal diffusivity, and {\displaystyle g} is the rate of heat generation per unit volume. Heat diffuses from the source according to this diffusion equation, whose solution in a homogeneous medium has a Gaussian distribution.
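A minimal numerical sketch of the diffusion behaviour described above, using a 1-D explicit finite-difference scheme with an initial point heat source (all constants are illustrative, not taken from the article):

```python
import numpy as np

# 1-D heat diffusion dT/dt = k * d2T/dx2 with no further generation (g = 0).
k = 1e-4               # thermal diffusivity (m^2/s), illustrative value
dx = 1e-3              # grid spacing (m)
dt = 0.4 * dx**2 / k   # time step within the explicit stability limit dt <= dx^2/(2k)

n = 101
T = np.zeros(n)
T[n // 2] = 100.0      # initial point heat source (arbitrary units)

for _ in range(200):
    # second spatial derivative by central differences (boundary values held at 0)
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * k * lap

# The initially concentrated heat spreads into a bell-shaped (Gaussian-like)
# profile centred on the source, as the analytic solution predicts.
print(round(T[n // 2], 3), round(T[n // 2 + 5], 3))
```

The explicit scheme is chosen only for brevity; implicit schemes allow larger time steps.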
== See also ==
Thermal simulations for integrated circuits
Thermal design power
Thermal management in electronics
== References ==
== Further reading ==
Ogrenci-Memik, Seda (2015). Heat Management in Integrated circuits: On-chip and system-level monitoring and cooling. London, United Kingdom: The Institution of Engineering and Technology. ISBN 9781849199353. OCLC 934678500. | Wikipedia/Heat_generation_in_integrated_circuits |
MEMS (micro-electromechanical systems) is the technology of microscopic devices incorporating both electronic and moving parts. MEMS are made up of components between 1 and 100 micrometres in size (i.e., 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometres to a millimetre (i.e., 0.02 to 1.0 mm), although components arranged in arrays (e.g., digital micromirror devices) can be more than 1000 mm2. They usually consist of a central unit that processes data (an integrated circuit chip such as a microprocessor) and several components that interact with the surroundings (such as microsensors).
Because of the large surface area to volume ratio of MEMS, forces produced by ambient electromagnetism (e.g., electrostatic charges and magnetic moments), and fluid dynamics (e.g., surface tension and viscosity) are more important design considerations than with larger scale mechanical devices. MEMS technology is distinguished from molecular nanotechnology or molecular electronics in that the latter two must also consider surface chemistry.
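The scaling argument above can be made concrete: for a cube of side L, surface area grows as L^2 and volume as L^3, so the surface-area-to-volume ratio grows as 1/L when devices shrink. A quick sketch (sizes chosen for illustration):

```python
# Surface-area-to-volume ratio of a cube of side L: SA/V = 6*L^2 / L^3 = 6/L.
# Shrinking a part from 1 cm to 10 um raises SA/V a thousandfold, which is
# why surface forces (electrostatics, surface tension) dominate in MEMS.
def sa_to_v(side_m):
    return 6.0 * side_m**2 / side_m**3  # equals 6 / side_m, in 1/m

macro = sa_to_v(0.01)    # 1 cm macroscopic part
micro = sa_to_v(10e-6)   # 10 um MEMS feature
print(micro / macro)     # ratio of the two ratios
```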
The potential of very small machines was appreciated before the technology existed that could make them (see, for example, Richard Feynman's famous 1959 lecture There's Plenty of Room at the Bottom). MEMS became practical once they could be fabricated using modified semiconductor device fabrication technologies, normally used to make electronics. These include molding and plating, wet etching (KOH, TMAH) and dry etching (RIE and DRIE), electrical discharge machining (EDM), and other technologies capable of manufacturing small devices.
They merge at the nanoscale into nanoelectromechanical systems (NEMS) and nanotechnology.
== History ==
An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Robert A. Wickstrom for Harvey C. Nathanson in 1965. Another early example is the resonistor, an electromechanical monolithic resonator patented by Raymond J. Wilfinger between 1966 and 1971. During the 1970s to early 1980s, a number of MOSFET microsensors were developed for measuring physical, chemical, biological and environmental parameters.
The term "MEMS" was introduced in 1986. S.C. Jacobsen (PI) and J.E. Wood (Co-PI) introduced the term "MEMS" by way of a proposal to DARPA (15 July 1986), titled "Micro Electro-Mechanical Systems (MEMS)", granted to the University of Utah. The term "MEMS" was presented by way of an invited talk by S.C. Jacobsen, titled "Micro Electro-Mechanical Systems (MEMS)", at the IEEE Micro Robots and Teleoperators Workshop, Hyannis, MA Nov. 9–11, 1987. The term "MEMS" was published by way of a submitted paper by J.E. Wood, S.C. Jacobsen, and K.W. Grace, titled "SCOFSS: A Small Cantilevered Optical Fiber Servo System", in the IEEE Proceedings Micro Robots and Teleoperators Workshop, Hyannis, MA Nov. 9–11, 1987. CMOS transistors have been manufactured on top of MEMS structures.
== Types ==
There are two basic types of MEMS switch technology: capacitive and ohmic. A capacitive MEMS switch is developed using a moving plate or sensing element, which changes the capacitance. Ohmic switches are controlled by electrostatically controlled cantilevers. Ohmic MEMS switches can fail from metal fatigue of the MEMS actuator (cantilever) and contact wear, since cantilevers can deform over time.
== Materials ==
The fabrication of MEMS evolved from the process technology in semiconductor device fabrication, i.e. the basic techniques are deposition of material layers, patterning by photolithography and etching to produce the required shapes.
Silicon
Silicon is the material used to create most integrated circuits used in consumer electronics in the modern industry. The economies of scale, ready availability of inexpensive high-quality materials, and ability to incorporate electronic functionality make silicon attractive for a wide variety of MEMS applications. Silicon also has significant advantages engendered through its material properties. In single crystal form, silicon is an almost perfect Hookean material, meaning that when it is flexed there is virtually no hysteresis and hence almost no energy dissipation. As well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. Semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and MEMS in particular. Silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems.
Polymers
Even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. Polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. MEMS devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges.
Metals
Metals can also be used to create MEMS elements. While metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. Metals can be deposited by electroplating, evaporation, and sputtering processes. Commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver.
Ceramics
The nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in MEMS fabrication due to advantageous combinations of material properties. AlN crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. TiN, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic MEMS actuation schemes with ultrathin beams. Moreover, the high resistance of TiN against biocorrosion qualifies the material for applications in biogenic environments. The figure shows an electron-microscopic picture of a MEMS biosensor with a 50 nm thin bendable TiN beam above a TiN ground plate. Both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. When a fluid is suspended in the cavity its viscosity may be derived from bending the beam by electrical attraction to the ground plate and measuring the bending velocity.
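The beam and ground plate described above behave as a parallel-plate capacitor, so the electrostatic attraction driving the beam can be estimated with the textbook formula F = ε0·A·V²/(2d²). The dimensions below are hypothetical, chosen only to show the order of magnitude:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_force(area_m2, gap_m, volts):
    """Textbook parallel-plate electrostatic force F = eps0 * A * V^2 / (2 * d^2)."""
    return EPS0 * area_m2 * volts**2 / (2.0 * gap_m**2)

# Hypothetical geometry (not from the article): 100 um x 10 um plate,
# 1 um gap, 5 V drive voltage.
F = plate_force(100e-6 * 10e-6, 1e-6, 5.0)
print(F)  # on the order of 1e-7 N, large relative to a 50 nm thin beam
```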
== Basic processes ==
=== Deposition processes ===
One of the basic building blocks in MEMS processing is the ability to deposit thin films of material with a thickness anywhere from one micrometre to about 100 micrometres. The NEMS process is the same, although the measurement of film deposition ranges from a few nanometres to one micrometre. There are two types of deposition processes, as follows.
==== Physical deposition ====
Physical vapor deposition ("PVD") consists of a process in which a material is removed from a target, and deposited on a surface. Techniques to do this include the process of sputtering, in which an ion beam liberates atoms from a target, allowing them to move through the intervening space and deposit on the desired substrate, and evaporation, in which a material is evaporated from a target using either heat (thermal evaporation) or an electron beam (e-beam evaporation) in a vacuum system.
==== Chemical deposition ====
Chemical deposition techniques include chemical vapor deposition (CVD), in which a stream of source gas reacts on the substrate to grow the material desired. This can be further divided into categories depending on the details of the technique, for example LPCVD (low-pressure chemical vapor deposition) and PECVD (plasma-enhanced chemical vapor deposition). Oxide films can also be grown by the technique of thermal oxidation, in which the (typically silicon) wafer is exposed to oxygen and/or steam, to grow a thin surface layer of silicon dioxide.
=== Patterning ===
Patterning is the transfer of a pattern into a material.
=== Lithography ===
Lithography in a MEMS context is typically the transfer of a pattern into a photosensitive material by selective exposure to a radiation source such as light. A photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. If a photosensitive material is selectively exposed to radiation (e.g. by masking some of the radiation) the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differs.
This exposed region can then be removed or treated providing a mask for the underlying substrate. Photolithography is typically used with metal or other thin film deposition, wet and dry etching. Sometimes, photolithography is used to create structure without any kind of post etching. One example is SU8 based lens where SU8 based square blocks are generated. Then the photoresist is melted to form a semi-sphere which acts as a lens.
Electron beam lithography (often abbreviated as e-beam lithography) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film (called the resist), ("exposing" the resist) and of selectively removing either exposed or non-exposed regions of the resist ("developing"). The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. It was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures. The primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. This form of maskless lithography has found wide usage in photomask-making used in photolithography, low-volume production of semiconductor components, and research & development. The key limitation of electron beam lithography is throughput, i.e., the very long time it takes to expose an entire silicon wafer or glass substrate. A long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. Also, the turn-around time for reworking or re-design is lengthened unnecessarily if the pattern is not being changed the second time.
It is known that focused-ion beam lithography has the capability of writing extremely fine lines (less than 50 nm line and space has been achieved) without proximity effect. However, because the writing field in ion-beam lithography is quite small, large area patterns must be created by stitching together the small fields.
Ion track technology is a deep cutting tool with a resolution limit around 8 nm applicable to radiation resistant minerals, glasses and polymers. It is capable of generating holes in thin films without any development process. Structural depth can be defined either by ion range or by material thickness. Aspect ratios up to several 10^4 can be reached. The technique can shape and texture materials at a defined inclination angle. Random patterns, single-ion track structures and aimed patterns consisting of individual single tracks can be generated.
X-ray lithography is a process used in the electronic industry to selectively remove parts of a thin film. It uses X-rays to transfer a geometric pattern from a mask to a light-sensitive chemical photoresist, or simply "resist", on the substrate. A series of chemical treatments then engraves the produced pattern into the material underneath the photoresist.
Diamond patterning is a method of forming diamond MEMS. It is achieved by the lithographic application of diamond films to a substrate such as silicon. The patterns can be formed by selective deposition through a silicon dioxide mask, or by deposition followed by micromachining or focused ion beam milling.
=== Etching processes ===
There are two basic categories of etching processes: wet etching and dry etching. In the former, the material is dissolved when immersed in a chemical solution. In the latter, the material is sputtered or dissolved using reactive ions or a vapor phase etchant.
==== Wet etching ====
Wet chemical etching consists of the selective removal of material by dipping a substrate into a solution that dissolves it. The chemical nature of this etching process provides good selectivity, which means the etching rate of the target material is considerably higher than that of the mask material if selected carefully. Wet etching can be performed using either isotropic wet etchants or anisotropic wet etchants. Isotropic wet etchants etch in all directions of the crystalline silicon at approximately equal rates. Anisotropic wet etchants preferably etch along certain crystal planes at faster rates than other planes, thereby allowing more complicated 3-D microstructures to be implemented. Wet anisotropic etchants are often used in conjunction with boron etch stops wherein the surface of the silicon is heavily doped with boron, resulting in a silicon material layer that is resistant to the wet etchants. This has been used in MEMS pressure sensor manufacturing, for example.
In isotropic etching, material removal progresses at the same speed in all directions. With anisotropic etchants, long and narrow holes in a mask will produce v-shaped grooves in the silicon. The surface of these grooves can be atomically smooth if the etch is carried out correctly, with dimensions and angles being extremely accurate.
Some single crystal materials, such as silicon, will have different etching rates depending on the crystallographic orientation of the substrate. This is known as anisotropic etching and one of the most common examples is the etching of silicon in KOH (potassium hydroxide), where Si <111> planes etch approximately 100 times slower than other planes (crystallographic orientations). Therefore, etching a rectangular hole in a (100)-Si wafer results in a pyramid shaped etch pit with 54.7° walls, instead of a hole with curved sidewalls as with isotropic etching.
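The 54.7° sidewall angle fixes the etch-pit geometry: a square mask opening self-terminates in an inverted pyramid whose depth follows from simple trigonometry. A quick sketch of this standard (100)-silicon geometry (the specific opening width is illustrative):

```python
import math

# Anisotropic KOH etch of (100) silicon: sidewalls are slow-etching {111}
# planes inclined at ~54.74 deg to the surface. A square mask opening of
# width w self-terminates in a pyramidal pit of depth d = (w/2) * tan(54.74 deg).
ANGLE = math.radians(54.74)

def pit_depth_um(opening_um):
    return (opening_um / 2.0) * math.tan(ANGLE)

print(round(pit_depth_um(100.0), 1))  # ~70.7 um for a 100 um opening
```

Since tan(54.74°) ≈ √2, the depth is roughly 0.707 times the opening width.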
Hydrofluoric acid is commonly used as an aqueous etchant for silicon dioxide (SiO2, also known as BOX for SOI), usually in 49% concentrated form, 5:1, 10:1 or 20:1 BOE (buffered oxide etchant) or BHF (buffered HF). Hydrofluoric acid etchants were first used in medieval times for glass etching. HF was used in IC fabrication for patterning the gate oxide until the process step was replaced by RIE. Hydrofluoric acid is considered one of the more dangerous acids in the cleanroom.
Electrochemical etching (ECE) for dopant-selective removal of silicon is a common method to automate and to selectively control etching. An active p–n diode junction is required, and either type of dopant can be the etch-resistant ("etch-stop") material. Boron is the most common etch-stop dopant. In combination with wet anisotropic etching as described above, ECE has been used successfully for controlling silicon diaphragm thickness in commercial piezoresistive silicon pressure sensors. Selectively doped regions can be created either by implantation, diffusion, or epitaxial deposition of silicon.
==== Dry etching ====
Xenon difluoride (XeF2) is a dry vapor phase isotropic etch for silicon originally applied for MEMS in 1995 at University of California, Los Angeles. Primarily used for releasing metal and dielectric structures by undercutting silicon, XeF2 has the advantage of a stiction-free release unlike wet etchants. Its etch selectivity to silicon is very high, allowing it to work with photoresist, SiO2, silicon nitride, and various metals for masking. Its reaction to silicon is "plasmaless", is purely chemical and spontaneous and is often operated in pulsed mode. Models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach.
Modern VLSI processes avoid wet etching, and use plasma etching instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic. Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching. The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching. The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal.
Ion milling, or sputter etching, uses lower pressures, often as low as 10^-4 Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar+, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10^-3 and 10^-1 Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features.
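The operating pressures quoted for the three dry-etch families can be collected into a small helper. The regime boundaries follow the ranges stated above; the gap between the ion-milling and RIE ranges is lumped into RIE here for simplicity, and the Torr-to-pascal factor is the standard 133.322:

```python
TORR_TO_PA = 133.322  # 1 Torr in pascals (standard conversion)

def etch_regime(pressure_torr):
    """Rough classification of a dry-etch process by chamber pressure."""
    if pressure_torr <= 1e-4:
        return "ion milling / sputter etching"
    if pressure_torr <= 1e-1:
        return "reactive-ion etching (RIE)"
    if pressure_torr <= 5.0:
        return "ordinary plasma etching"
    return "above typical dry-etch range"

print(etch_regime(1e-4), "|", etch_regime(0.01), "|", etch_regime(1.0))
```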
In reactive-ion etching (RIE), the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture using an RF power source, which breaks the gas molecules into ions. The ions accelerate towards, and react with, the surface of the material being etched, forming another gaseous material. This is known as the chemical part of reactive ion etching. There is also a physical part, which is similar to the sputtering deposition process. If the ions have high enough energy, they can knock atoms out of the material to be etched without a chemical reaction. It is a very complex task to develop dry etch processes that balance chemical and physical etching, since there are many parameters to adjust. By changing the balance it is possible to influence the anisotropy of the etching, since the chemical part is isotropic and the physical part highly anisotropic the combination can form sidewalls that have shapes from rounded to vertical.
Deep reactive ion etching (DRIE) is a special subclass of RIE that is growing in popularity. In this process, etch depths of hundreds of micrometers are achieved with almost vertical sidewalls. The primary technology is based on the so-called "Bosch process", named after the German company Robert Bosch, which filed the original patent, where two different gas compositions alternate in the reactor. Currently, there are two variations of the DRIE. The first variation consists of three distinct steps (the original Bosch process) while the second variation only consists of two steps.
In the first variation, the etch cycle is as follows:
(i) SF6 isotropic etch;
(ii) C4F8 passivation;
(iii) SF6 anisotropic etch for floor cleaning.
In the 2nd variation, steps (i) and (iii) are combined.
Both variations operate similarly. The C4F8 creates a polymer on the surface of the substrate, and the second gas composition (SF6 and O2) etches the substrate. The polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. Since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. As a result, etching aspect ratios of 50 to 1 can be achieved. The process can easily be used to etch completely through a silicon substrate, and etch rates are 3–6 times higher than wet etching.
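The alternating passivate-and-etch loop described above can be sketched as a toy depth model. The per-cycle "scallop" depth and cycle count below are hypothetical illustrations, not measured values:

```python
# Toy model of the Bosch DRIE cycle: each iteration deposits a C4F8
# passivation layer, a directional SF6 burst clears the polymer from the
# trench floor, and an isotropic SF6 etch then deepens the trench while
# the polymer-coated sidewalls remain protected.
def bosch_etch(cycles, depth_per_cycle_um=0.5):
    depth = 0.0
    for _ in range(cycles):
        # (ii)  C4F8 passivation coats floor and sidewalls
        # (iii) directional SF6 burst clears the floor polymer
        # (i)   isotropic SF6 etch removes one scallop of silicon
        depth += depth_per_cycle_um
    return depth

print(bosch_etch(400))  # 400 cycles at 0.5 um/cycle -> 200 um deep trench
```

Each cycle leaves a small scallop on the sidewall, which is why DRIE sidewalls are nearly, but not perfectly, vertical.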
After preparing a large number of MEMS devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. For some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. Wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing.
== Manufacturing technologies ==
Bulk micromachining is the oldest paradigm of silicon-based MEMS. The whole thickness of a silicon wafer is used for building the micro-mechanical structures. Silicon is machined using various etching processes. Bulk micromachining has been essential in enabling high performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s.
Surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. Surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining MEMS and integrated circuits on the same silicon wafer. The original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. Interdigital comb electrodes were used to produce in-plane forces and to detect in-plane movement capacitively. This MEMS paradigm has enabled the manufacturing of low cost accelerometers for e.g. automotive air-bag systems and other applications where low performance and/or high g-ranges are sufficient. Analog Devices has pioneered the industrialization of surface micromachining and has realized the co-integration of MEMS and integrated circuits.
Wafer bonding involves joining two or more substrates (usually having the same diameter) to one another to form a composite structure. There are several types of wafer bonding processes that are used in microsystems fabrication including: direct or fusion wafer bonding, wherein two or more wafers are bonded together that are usually made of silicon or some other semiconductor material; anodic bonding wherein a boron-doped glass wafer is bonded to a semiconductor wafer, usually silicon; thermocompression bonding, wherein an intermediary thin-film material layer is used to facilitate wafer bonding; and eutectic bonding, wherein a thin-film layer of gold is used to bond two silicon wafers. Each of these methods have specific uses depending on the circumstances. Most wafer bonding processes rely on three basic criteria for successfully bonding: the wafers to be bonded are sufficiently flat; the wafer surfaces are sufficiently smooth; and the wafer surfaces are sufficiently clean. The most stringent criteria for wafer bonding is usually the direct fusion wafer bonding since even one or more small particulates can render the bonding unsuccessful. In comparison, wafer bonding methods that use intermediary layers are often far more forgiving.
Both bulk and surface silicon micromachining are used in the industrial production of sensors, ink-jet nozzles, and other devices. But in many cases the distinction between these two has diminished. A new etching technology, deep reactive-ion etching, has made it possible to combine good performance typical of bulk micromachining with comb structures and in-plane operation typical of surface micromachining. While it is common in surface micromachining to have structural layer thickness in the range of 2 μm, in HAR silicon micromachining the thickness can be from 10 to 100 μm. The materials commonly used in HAR silicon micromachining are thick polycrystalline silicon, known as epi-poly, and bonded silicon-on-insulator (SOI) wafers although processes for bulk silicon wafer also have been created (SCREAM). Bonding a second wafer by glass frit bonding, anodic bonding or alloy bonding is used to protect the MEMS structures. Integrated circuits are typically not combined with HAR silicon micromachining.
== Applications ==
Some common commercial applications of MEMS include:
Inkjet printers, which use piezoelectrics or thermal bubble ejection to deposit ink on paper.
Accelerometers in modern cars for a large number of purposes including airbag deployment and electronic stability control.
Inertial measurement units (IMUs):
MEMS accelerometers.
MEMS gyroscopes in remote controlled, or autonomous, helicopters, planes and multirotors (also known as drones), used for automatically sensing and balancing flying characteristics of roll, pitch and yaw.
MEMS magnetic field sensor (magnetometer) may also be incorporated in such devices to provide directional heading.
MEMS inertial navigation systems (INSs) of modern cars, airplanes, submarines and other vehicles to detect yaw, pitch, and roll; for example, the autopilot of an airplane.
Accelerometers in consumer electronics devices such as game controllers (Nintendo Wii), personal media players / cell phones (virtually all smartphones, various HTC PDA models), augmented reality (AR) and virtual reality (VR) devices, and a number of digital cameras (various Canon Digital IXUS models). Also used in PCs to park the hard disk head when free-fall is detected, to prevent damage and data loss.
MEMS speakers for headphones.
MEMS barometers.
MEMS microphones in portable devices, e.g., mobile phones, head sets and laptops. The market for smart microphones includes smartphones, wearable devices, smart home and automotive applications.
Precision temperature-compensated resonators in real-time clocks.
Silicon pressure sensors e.g., car tire pressure sensors, and disposable blood pressure sensors.
Displays e.g., the digital micromirror device (DMD) chip in a projector based on DLP technology, which has a surface with several hundred thousand micromirrors or single micro-scanning-mirrors also called microscanners. The MEMS mirrors can also be used in conjunction with laser scanning to project an image.
Optical switching technology, used for switching and alignment in data communications.
RF switches and relays.
Bio-MEMS applications in medical and health related technologies including lab-on-a-chip (taking advantage of microfluidics and micropumps), biosensors, chemosensors as well as embedded components of medical devices e.g. stents.
Interferometric modulator display (IMOD) applications in consumer electronics (primarily displays for mobile devices), used to create interferometric modulation, a reflective display technology as found in mirasol displays.
Fluid acceleration, such as for micro-cooling.
Micro-scale energy harvesting including piezoelectric, electrostatic and electromagnetic micro harvesters.
Micromachined ultrasound transducers.
MEMS-based loudspeakers focusing on applications such as in-ear headphones and hearing aids.
MEMS oscillators.
MEMS-based scanning probe microscopes including atomic force microscopes.
LiDAR (light detection and ranging).
== Industry structure ==
The global market for micro-electromechanical systems, which includes products such as automobile airbag systems, display systems and inkjet cartridges, totaled $40 billion in 2006, according to Global MEMS/Microsystems Markets and Opportunities, a research report from SEMI and Yole Development, and was forecast to reach $72 billion by 2011.
Companies with strong MEMS programs come in many sizes. Larger firms specialize in manufacturing high volume inexpensive components or packaged solutions for end markets such as automobiles, biomedical, and electronics. Smaller firms provide value in innovative solutions and absorb the expense of custom fabrication with high sales margins. Both large and small companies typically invest in R&D to explore new MEMS technology.
The market for materials and equipment used to manufacture MEMS devices topped $1 billion worldwide in 2006. Materials demand is driven by substrates, making up over 70 percent of the market, packaging coatings and increasing use of chemical mechanical planarization (CMP). While MEMS manufacturing continues to be dominated by used semiconductor equipment, there is a migration to 200mm lines and select new tools, including etch and bonding for certain MEMS applications.
== See also ==
MEMS sensor generations
Microoptoelectromechanical systems
Microoptomechanical systems
Nanoelectromechanical systems
== References ==
== Further reading ==
Microsystem Technologies, published by Springer Publishing, Journal homepage
Geschke, O.; Klank, H.; Telleman, P., eds. (2004). Microsystem Engineering of Lab-on-a-chip Devices. Wiley. ISBN 3-527-30733-8.
== External links ==
Chollet, F.; Liu, HB. (10 August 2018). A (not so) short introduction to MEMS. ISBN 978-2-9542015-0-4. 5.4. | Wikipedia/Microelectromechanical_systems |
Selenographia, sive Lunae descriptio (Selenography, or A Description of The Moon) was printed in 1647 and is a milestone work by Johannes Hevelius. It includes the first detailed map of the Moon, created from Hevelius's personal observations. In his treatise, Hevelius reflected on the difference between his own work and that of Galileo Galilei. Hevelius remarked that the quality of Galileo's representations of the Moon in Sidereus nuncius (1610) left something to be desired. Selenography was dedicated to King Ladislaus IV of Poland and along with Riccioli/Grimaldi's Almagestum Novum became the standard work on the Moon for over a century. There are many copies that have survived, including those in Bibliothèque nationale de France, in the library of Polish Academy of Sciences, in the Stillman Drake Collection at the Thomas Fisher Rare Books Library at the University of Toronto, and in the Gunnerus Library at the Norwegian University of Science and Technology in Trondheim.
== Notes ==
== External links ==
Selenographia, sive Lunae descriptio | Wikipedia/Selenographia,_sive_Lunae_descriptio |
In mathematics, an integral domain is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Integral domains are generalizations of the ring of integers and provide a natural setting for studying divisibility. In an integral domain, every nonzero element a has the cancellation property, that is, if a ≠ 0, an equality ab = ac implies b = c.
"Integral domain" is defined almost universally as above, but there is some variation. This article follows the convention that rings have a multiplicative identity, generally denoted 1, but some authors do not require integral domains to have a multiplicative identity. Noncommutative integral domains are sometimes admitted; this article, however, follows the much more usual convention of reserving the term "integral domain" for the commutative case and using "domain" for the general case, including noncommutative rings.
Some sources, notably Lang, use the term entire ring for integral domain.
Some specific kinds of integral domains are given with the following chain of class inclusions:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields
== Definition ==
An integral domain is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Equivalently:
An integral domain is a nonzero commutative ring with no nonzero zero divisors.
An integral domain is a commutative ring in which the zero ideal {0} is a prime ideal.
An integral domain is a nonzero commutative ring for which every nonzero element is cancellable under multiplication.
An integral domain is a ring for which the set of nonzero elements is a commutative monoid under multiplication (which in particular requires that the set of nonzero elements be closed under multiplication).
An integral domain is a nonzero commutative ring in which for every nonzero element r, the function that maps each element x of the ring to the product xr is injective. Elements r with this property are called regular, so it is equivalent to require that every nonzero element of the ring be regular.
An integral domain is a ring that is isomorphic to a subring of a field. (Given an integral domain, one can embed it in its field of fractions.)
== Examples ==
The archetypical example is the ring {\displaystyle \mathbb {Z} } of all integers.
Every field is an integral domain. For example, the field {\displaystyle \mathbb {R} } of all real numbers is an integral domain. Conversely, every Artinian integral domain is a field. In particular, all finite integral domains are finite fields (more generally, by Wedderburn's little theorem, finite domains are finite fields). The ring of integers {\displaystyle \mathbb {Z} } provides an example of a non-Artinian infinite integral domain that is not a field, possessing infinite descending sequences of ideals such as:
{\displaystyle \mathbb {Z} \supset 2\mathbb {Z} \supset \cdots \supset 2^{n}\mathbb {Z} \supset 2^{n+1}\mathbb {Z} \supset \cdots }
Rings of polynomials are integral domains if the coefficients come from an integral domain. For instance, the ring {\displaystyle \mathbb {Z} [x]} of all polynomials in one variable with integer coefficients is an integral domain; so is the ring {\displaystyle \mathbb {C} [x_{1},\ldots ,x_{n}]} of all polynomials in n variables with complex coefficients.
The previous example can be further exploited by taking quotients by prime ideals. For example, the ring {\displaystyle \mathbb {C} [x,y]/(y^{2}-x(x-1)(x-2))} corresponding to a plane elliptic curve is an integral domain. Integrality can be checked by showing that {\displaystyle y^{2}-x(x-1)(x-2)} is an irreducible polynomial.
The ring {\displaystyle \mathbb {Z} [x]/(x^{2}-n)\cong \mathbb {Z} [{\sqrt {n}}]} is an integral domain for any non-square integer {\displaystyle n}. If {\displaystyle n>0}, then this ring is always a subring of {\displaystyle \mathbb {R} }; otherwise, it is a subring of {\displaystyle \mathbb {C} .}
The ring of p-adic integers {\displaystyle \mathbb {Z} _{p}} is an integral domain.
The ring of formal power series of an integral domain is an integral domain.
If {\displaystyle U} is a connected open subset of the complex plane {\displaystyle \mathbb {C} }, then the ring {\displaystyle {\mathcal {H}}(U)} consisting of all holomorphic functions is an integral domain. The same is true for rings of analytic functions on connected open subsets of analytic manifolds.
A regular local ring is an integral domain. In fact, a regular local ring is a UFD.
== Non-examples ==
The following rings are not integral domains.
The zero ring (the ring in which {\displaystyle 0=1}).
The quotient ring {\displaystyle \mathbb {Z} /m\mathbb {Z} } when m is a composite number. To show this, choose a proper factorization {\displaystyle m=xy} (meaning that {\displaystyle x} and {\displaystyle y} are not equal to {\displaystyle 1} or {\displaystyle m}). Then {\displaystyle x\not \equiv 0{\bmod {m}}} and {\displaystyle y\not \equiv 0{\bmod {m}}}, but {\displaystyle xy\equiv 0{\bmod {m}}}.
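This non-example can be checked by brute force. The sketch below (an illustration of ours, not from the article; the helper name is arbitrary) enumerates the zero divisors of Z/mZ for a composite and for a prime modulus:

```python
# Enumerate the zero divisors of Z/mZ: nonzero residues x for which some
# nonzero y gives x*y ≡ 0 (mod m).
def zero_divisors(m):
    return sorted({x for x in range(1, m)
                     for y in range(1, m) if (x * y) % m == 0})

# For m = 6 = 2*3, the proper factors are zero divisors: 2*3 ≡ 0 (mod 6).
print(zero_divisors(6))   # -> [2, 3, 4]
# For a prime modulus there are none, so Z/pZ is an integral domain (a field).
print(zero_divisors(7))   # -> []
```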
A product of two nonzero commutative rings. In such a product {\displaystyle R\times S}, one has {\displaystyle (1,0)\cdot (0,1)=(0,0)}.
The quotient ring {\displaystyle \mathbb {Z} [x]/(x^{2}-n^{2})} for any {\displaystyle n\in \mathbb {Z} }. The images of {\displaystyle x+n} and {\displaystyle x-n} are nonzero, while their product is 0 in this ring.
The ring of n × n matrices over any nonzero ring when n ≥ 2. If {\displaystyle M} and {\displaystyle N} are matrices such that the image of {\displaystyle N} is contained in the kernel of {\displaystyle M}, then {\displaystyle MN=0}. For example, this happens for {\displaystyle M=N=({\begin{smallmatrix}0&1\\0&0\end{smallmatrix}})}.
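The matrix in the example can be multiplied out directly (a small sketch of ours, using plain nested lists so no external library is assumed):

```python
# The 2x2 matrix N = [[0,1],[0,0]] squares to zero, so the matrix ring has
# zero divisors: a nonzero N with N*N = 0.
def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = [[0, 1], [0, 0]]
print(matmul2(N, N))  # -> [[0, 0], [0, 0]]
```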
The quotient ring {\displaystyle k[x_{1},\ldots ,x_{n}]/(fg)} for any field {\displaystyle k} and any non-constant polynomials {\displaystyle f,g\in k[x_{1},\ldots ,x_{n}]}. The images of f and g in this quotient ring are nonzero elements whose product is 0. This argument shows, equivalently, that {\displaystyle (fg)} is not a prime ideal. The geometric interpretation of this result is that the zeros of fg form an affine algebraic set that is not irreducible (that is, not an algebraic variety) in general. The only case where this algebraic set may be irreducible is when fg is a power of an irreducible polynomial, which defines the same algebraic set.
The ring of continuous functions on the unit interval. Consider the functions
{\displaystyle f(x)={\begin{cases}1-2x&x\in \left[0,{\tfrac {1}{2}}\right]\\0&x\in \left[{\tfrac {1}{2}},1\right]\end{cases}}\qquad g(x)={\begin{cases}0&x\in \left[0,{\tfrac {1}{2}}\right]\\2x-1&x\in \left[{\tfrac {1}{2}},1\right]\end{cases}}}
Neither {\displaystyle f} nor {\displaystyle g} is everywhere zero, but {\displaystyle fg} is.
The tensor product {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} }. This ring has two non-trivial idempotents, {\displaystyle e_{1}={\tfrac {1}{2}}(1\otimes 1)-{\tfrac {1}{2}}(i\otimes i)} and {\displaystyle e_{2}={\tfrac {1}{2}}(1\otimes 1)+{\tfrac {1}{2}}(i\otimes i)}. They are orthogonal, meaning that {\displaystyle e_{1}e_{2}=0}, and hence {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} } is not a domain. In fact, there is an isomorphism {\displaystyle \mathbb {C} \times \mathbb {C} \to \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} } defined by {\displaystyle (z,w)\mapsto z\cdot e_{1}+w\cdot e_{2}}. Its inverse is defined by {\displaystyle z\otimes w\mapsto (zw,z{\overline {w}})}. This example shows that a fiber product of irreducible affine schemes need not be irreducible.
== Divisibility, prime elements, and irreducible elements ==
In this section, R is an integral domain.
Given elements a and b of R, one says that a divides b, or that a is a divisor of b, or that b is a multiple of a, if there exists an element x in R such that ax = b.
The units of R are the elements that divide 1; these are precisely the invertible elements in R. Units divide all other elements.
If a divides b and b divides a, then a and b are associated elements or associates. Equivalently, a and b are associates if a = ub for some unit u.
An irreducible element is a nonzero non-unit that cannot be written as a product of two non-units.
A nonzero non-unit p is a prime element if, whenever p divides a product ab, then p divides a or p divides b. Equivalently, an element p is prime if and only if the principal ideal (p) is a nonzero prime ideal.
Both notions of irreducible elements and prime elements generalize the ordinary definition of prime numbers in the ring {\displaystyle \mathbb {Z} ,} if one considers as prime the negative primes.
Every prime element is irreducible. The converse is not true in general: for example, in the quadratic integer ring {\displaystyle \mathbb {Z} \left[{\sqrt {-5}}\right]} the element 3 is irreducible (if it factored nontrivially, the factors would each have to have norm 3, but there are no norm 3 elements since {\displaystyle a^{2}+5b^{2}=3} has no integer solutions), but not prime (since 3 divides {\displaystyle \left(2+{\sqrt {-5}}\right)\left(2-{\sqrt {-5}}\right)} without dividing either factor). In a unique factorization domain (or more generally, a GCD domain), an irreducible element is a prime element.
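Both halves of this argument can be checked numerically. In the sketch below (an illustration of ours; the pair representation and helper names are arbitrary), elements a + b√-5 are stored as pairs (a, b):

```python
# Arithmetic in Z[sqrt(-5)]: (a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5.
def mul(u, v):
    a, b = u
    c, d = v
    return (a * c - 5 * b * d, a * d + b * c)

def norm(u):
    a, b = u
    return a * a + 5 * b * b

# (2 + √-5)(2 - √-5) = 9, which 3 divides ...
print(mul((2, 1), (2, -1)))              # -> (9, 0)
# ... yet no element has norm 3, so 3 has no nontrivial factorization:
print([(a, b) for a in range(-2, 3) for b in range(-2, 3)
       if norm((a, b)) == 3])            # -> []
```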
While unique factorization does not hold in {\displaystyle \mathbb {Z} \left[{\sqrt {-5}}\right]}, there is unique factorization of ideals. See Lasker–Noether theorem.
== Properties ==
A commutative ring R is an integral domain if and only if the ideal (0) of R is a prime ideal.
If R is a commutative ring and P is an ideal in R, then the quotient ring R/P is an integral domain if and only if P is a prime ideal.
Let R be an integral domain. Then the polynomial rings over R (in any number of indeterminates) are integral domains. This is in particular the case if R is a field.
The cancellation property holds in any integral domain: for any a, b, and c in an integral domain, if a ≠ 0 and ab = ac then b = c. Another way to state this is that the function x ↦ ax is injective for any nonzero a in the domain.
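As a concrete contrast (an illustrative sketch of ours, not from the article), cancellation fails in Z/6Z, which has zero divisors, but the map x ↦ ax is injective in Z/7Z:

```python
# Cancellation ab = ac => b = c holds exactly when there are no zero divisors.
m = 6
a, b, c = 2, 1, 4
print((a * b) % m == (a * c) % m)  # -> True: 2*1 ≡ 2*4 ≡ 2 (mod 6)
print(b == c)                      # -> False: 1 != 4, so 2 is not cancellable

# In Z/7Z (an integral domain) x -> a*x is injective for any a != 0:
p, a = 7, 3
print(sorted((a * x) % p for x in range(p)) == list(range(p)))  # -> True
```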
The cancellation property holds for ideals in any integral domain: if xI = xJ, then either x is zero or I = J.
An integral domain is equal to the intersection of its localizations at maximal ideals.
An inductive limit of integral domains is an integral domain.
If A, B are integral domains over an algebraically closed field k, then A ⊗k B is an integral domain. This is a consequence of Hilbert's nullstellensatz, and, in algebraic geometry, it implies the statement that the coordinate ring of the product of two affine algebraic varieties over an algebraically closed field is again an integral domain.
== Field of fractions ==
The field of fractions K of an integral domain R is the set of fractions a/b with a and b in R and b ≠ 0 modulo an appropriate equivalence relation, equipped with the usual addition and multiplication operations. It is "the smallest field containing R" in the sense that there is an injective ring homomorphism R → K such that any injective ring homomorphism from R to a field factors through K. The field of fractions of the ring of integers {\displaystyle \mathbb {Z} } is the field of rational numbers {\displaystyle \mathbb {Q} .}
The field of fractions of a field is isomorphic to the field itself.
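Python's standard `fractions` module implements exactly this construction for R = Z, including the equivalence relation (a, b) ~ (c, d) iff ad = bc (a brief illustration, not from the article):

```python
from fractions import Fraction

# 2/4 and 1/2 lie in the same equivalence class:
print(Fraction(2, 4) == Fraction(1, 2))   # -> True
# The usual fraction arithmetic:
print(Fraction(1, 3) + Fraction(1, 6))    # -> 1/2
# The embedding Z -> Q sends n to n/1:
print(Fraction(5, 1) == 5)                # -> True
```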
== Algebraic geometry ==
Integral domains are characterized by the condition that they are reduced (that is x2 = 0 implies x = 0) and irreducible (that is there is only one minimal prime ideal). The former condition ensures that the nilradical of the ring is zero, so that the intersection of all the ring's minimal primes is zero. The latter condition is that the ring have only one minimal prime. It follows that the unique minimal prime ideal of a reduced and irreducible ring is the zero ideal, so such rings are integral domains. The converse is clear: an integral domain has no nonzero nilpotent elements, and the zero ideal is the unique minimal prime ideal.
This translates, in algebraic geometry, into the fact that the coordinate ring of an affine algebraic set is an integral domain if and only if the algebraic set is an algebraic variety.
More generally, a commutative ring is an integral domain if and only if its spectrum is an integral affine scheme.
== Characteristic and homomorphisms ==
The characteristic of an integral domain is either 0 or a prime number.
If R is an integral domain of prime characteristic p, then the Frobenius endomorphism x ↦ xp is injective.
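For the simplest case R = Z/pZ, injectivity of the Frobenius map is easy to check directly (a quick sketch of ours; here x ↦ x^p is even the identity, by Fermat's little theorem):

```python
# The Frobenius endomorphism x -> x**p on Z/pZ, computed with modular pow.
p = 7
frob = [pow(x, p, p) for x in range(p)]
print(frob)                     # -> [0, 1, 2, 3, 4, 5, 6]
print(len(set(frob)) == p)      # -> True: the map is injective
```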
== See also ==
Dedekind–Hasse norm – the extra structure needed for an integral domain to be principal
Zero-product property
== Notes ==
== Citations ==
== References ==
== External links ==
"where does the term "integral domain" come from?". | Wikipedia/Associate_(ring_theory) |
In number theory, more specifically in local class field theory, the ramification groups are a filtration of the Galois group of a local field extension, which gives detailed information on the ramification phenomena of the extension.
== Ramification theory of valuations ==
In mathematics, the ramification theory of valuations studies the set of extensions of a valuation v of a field K to an extension L of K. It is a generalization of the ramification theory of Dedekind domains.
The structure of the set of extensions is known better when L/K is Galois.
=== Decomposition group and inertia group ===
Let (K, v) be a valued field and let L be a finite Galois extension of K. Let Sv be the set of equivalence classes of extensions of v to L and let G be the Galois group of L over K. Then G acts on Sv by σ[w] = [w ∘ σ] (i.e. w is a representative of the equivalence class [w] ∈ Sv and [w] is sent to the equivalence class of the composition of w with the automorphism σ : L → L; this is independent of the choice of w in [w]). In fact, this action is transitive.
Given a fixed extension w of v to L, the decomposition group of w is the stabilizer subgroup Gw of [w], i.e. it is the subgroup of G consisting of all elements that fix the equivalence class [w] ∈ Sv.
Let mw denote the maximal ideal of w inside the valuation ring Rw of w. The inertia group of w is the subgroup Iw of Gw consisting of elements σ such that σx ≡ x (mod mw) for all x in Rw. In other words, Iw consists of the elements of the decomposition group that act trivially on the residue field of w. It is a normal subgroup of Gw.
The reduced ramification index e(w/v) is independent of w and is denoted e(v). Similarly, the relative degree f(w/v) is also independent of w and is denoted f(v).
== Ramification groups in lower numbering ==
Ramification groups are a refinement of the Galois group {\displaystyle G} of a finite Galois extension {\displaystyle L/K} of local fields. We shall write {\displaystyle w,{\mathcal {O}}_{L},{\mathfrak {p}}} for the valuation, the ring of integers and its maximal ideal for {\displaystyle L}. As a consequence of Hensel's lemma, one can write {\displaystyle {\mathcal {O}}_{L}={\mathcal {O}}_{K}[\alpha ]} for some {\displaystyle \alpha \in L} where {\displaystyle {\mathcal {O}}_{K}} is the ring of integers of {\displaystyle K}. (This is stronger than the primitive element theorem.) Then, for each integer {\displaystyle i\geq -1}, we define {\displaystyle G_{i}} to be the set of all {\displaystyle s\in G} that satisfy the following equivalent conditions.
(i) {\displaystyle s} operates trivially on {\displaystyle {\mathcal {O}}_{L}/{\mathfrak {p}}^{i+1}.}
(ii) {\displaystyle w(s(x)-x)\geq i+1} for all {\displaystyle x\in {\mathcal {O}}_{L}}.
(iii) {\displaystyle w(s(\alpha )-\alpha )\geq i+1.}
The group {\displaystyle G_{i}} is called the {\displaystyle i}-th ramification group. These groups form a decreasing filtration,
{\displaystyle G_{-1}=G\supset G_{0}\supset G_{1}\supset \cdots \supset \{1\}.}
In fact, the {\displaystyle G_{i}} are normal by (i) and trivial for sufficiently large {\displaystyle i} by (iii). For the lowest indices, it is customary to call {\displaystyle G_{0}} the inertia subgroup of {\displaystyle G} because of its relation to the splitting of prime ideals, and {\displaystyle G_{1}} the wild inertia subgroup of {\displaystyle G}. The quotient {\displaystyle G_{0}/G_{1}} is called the tame quotient.
The Galois group {\displaystyle G} and its subgroups {\displaystyle G_{i}} are studied by employing the above filtration or, more specifically, the corresponding quotients. In particular:
{\displaystyle G/G_{0}=\operatorname {Gal} (l/k),} where {\displaystyle l,k} are the (finite) residue fields of {\displaystyle L,K}.
{\displaystyle G_{0}=1\Leftrightarrow L/K} is unramified.
{\displaystyle G_{1}=1\Leftrightarrow L/K} is tamely ramified (i.e., the ramification index is prime to the residue characteristic).
The study of ramification groups reduces to the totally ramified case, since one has {\displaystyle G_{i}=(G_{0})_{i}} for {\displaystyle i\geq 0}.
One also defines the function {\displaystyle i_{G}(s)=w(s(\alpha )-\alpha ),s\in G}. Condition (ii) above shows that {\displaystyle i_{G}} is independent of the choice of {\displaystyle \alpha } and, moreover, the study of the filtration {\displaystyle G_{i}} is essentially equivalent to that of {\displaystyle i_{G}}.
{\displaystyle i_{G}} satisfies the following: for {\displaystyle s,t\in G},
{\displaystyle i_{G}(s)\geq i+1\Leftrightarrow s\in G_{i}.}
{\displaystyle i_{G}(tst^{-1})=i_{G}(s).}
{\displaystyle i_{G}(st)\geq \min\{i_{G}(s),i_{G}(t)\}.}
Fix a uniformizer {\displaystyle \pi } of {\displaystyle L}. Then {\displaystyle s\mapsto s(\pi )/\pi } induces the injection {\displaystyle G_{i}/G_{i+1}\to U_{L,i}/U_{L,i+1},i\geq 0} where {\displaystyle U_{L,0}={\mathcal {O}}_{L}^{\times },U_{L,i}=1+{\mathfrak {p}}^{i}}. (The map actually does not depend on the choice of the uniformizer.) It follows from this that {\displaystyle G_{0}/G_{1}} is cyclic of order prime to {\displaystyle p}, and {\displaystyle G_{i}/G_{i+1}} is a product of cyclic groups of order {\displaystyle p}.
In particular, {\displaystyle G_{1}} is a p-group and {\displaystyle G_{0}} is solvable.
The ramification groups can be used to compute the different {\displaystyle {\mathfrak {D}}_{L/K}} of the extension {\displaystyle L/K} and that of subextensions:
{\displaystyle w({\mathfrak {D}}_{L/K})=\sum _{s\neq 1}i_{G}(s)=\sum _{i=0}^{\infty }(|G_{i}|-1).}
If {\displaystyle H} is a normal subgroup of {\displaystyle G}, then, for {\displaystyle \sigma \in G}, {\displaystyle i_{G/H}(\sigma )={1 \over e_{L/K}}\sum _{s\mapsto \sigma }i_{G}(s)}.
Combining this with the above, one obtains: for a subextension {\displaystyle F/K} corresponding to {\displaystyle H},
{\displaystyle v_{F}({\mathfrak {D}}_{F/K})={1 \over e_{L/F}}\sum _{s\not \in H}i_{G}(s).}
If {\displaystyle s\in G_{i},t\in G_{j},i,j\geq 1}, then {\displaystyle sts^{-1}t^{-1}\in G_{i+j+1}}. In the terminology of Lazard, this can be understood to mean that the Lie algebra {\displaystyle \operatorname {gr} (G_{1})=\sum _{i\geq 1}G_{i}/G_{i+1}} is abelian.
=== Example: the cyclotomic extension ===
The ramification groups for a cyclotomic extension {\displaystyle K_{n}:=\mathbf {Q} _{p}(\zeta )/\mathbf {Q} _{p}}, where {\displaystyle \zeta } is a {\displaystyle p^{n}}-th primitive root of unity, can be described explicitly:
{\displaystyle G_{s}=\operatorname {Gal} (K_{n}/K_{e}),} where e is chosen such that {\displaystyle p^{e-1}\leq s<p^{e}}.
=== Example: a quartic extension ===
Let K be the extension of Q2 generated by {\displaystyle x_{1}={\sqrt {2+{\sqrt {2}}}}}. The conjugates of {\displaystyle x_{1}} are {\displaystyle x_{2}={\sqrt {2-{\sqrt {2}}}}}, {\displaystyle x_{3}=-x_{1}}, {\displaystyle x_{4}=-x_{2}}.
A little computation shows that the quotient of any two of these is a unit. Hence they all generate the same ideal; call it π. {\displaystyle {\sqrt {2}}} generates π^2, and (2) = π^4.
Now {\displaystyle x_{1}-x_{3}=2x_{1}}, which is in π^5, and {\displaystyle x_{1}-x_{2}={\sqrt {4-2{\sqrt {2}}}},} which is in π^3.
Various methods show that the Galois group of K is {\displaystyle C_{4}}, cyclic of order 4. Also:
{\displaystyle G_{0}=G_{1}=G_{2}=C_{4}} and {\displaystyle G_{3}=G_{4}=\{1,(13)(24)\}.}
{\displaystyle w({\mathfrak {D}}_{K/Q_{2}})=3+3+3+1+1=11,} so that the different {\displaystyle {\mathfrak {D}}_{K/Q_{2}}=\pi ^{11}}.
Consistently, {\displaystyle x_{1}} satisfies X^4 − 4X^2 + 2, which has discriminant 2048 = 2^11.
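The discriminant claim is easy to verify numerically (a sketch of ours, using the standard closed-form discriminant of a depressed quartic x^4 + p x^2 + q x + r, which is not stated in the article):

```python
# Sanity-check w(D) = 11: the minimal polynomial X^4 - 4X^2 + 2 of x1 has
# discriminant 2^11 = 2048.
def quartic_discriminant(p, q, r):
    """Discriminant of x^4 + p*x^2 + q*x + r (standard formula)."""
    return (16 * p**4 * r - 4 * p**3 * q**2 - 128 * p**2 * r**2
            + 144 * p * q**2 * r - 27 * q**4 + 256 * r**3)

d = quartic_discriminant(-4, 0, 2)
print(d)             # -> 2048
print(d == 2**11)    # -> True: 2-adic valuation 11, matching 3+3+3+1+1
```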
== Ramification groups in upper numbering ==
If {\displaystyle u} is a real number {\displaystyle \geq -1}, let {\displaystyle G_{u}} denote {\displaystyle G_{i}} where i is the least integer {\displaystyle \geq u}. In other words, {\displaystyle s\in G_{u}\Leftrightarrow i_{G}(s)\geq u+1.}
Define {\displaystyle \phi } by
{\displaystyle \phi (u)=\int _{0}^{u}{dt \over (G_{0}:G_{t})}}
where, by convention, {\displaystyle (G_{0}:G_{t})} is equal to {\displaystyle (G_{-1}:G_{0})^{-1}} if {\displaystyle t=-1} and is equal to {\displaystyle 1} for {\displaystyle -1<t\leq 0}. Then {\displaystyle \phi (u)=u} for {\displaystyle -1\leq u\leq 0}. It is immediate that {\displaystyle \phi } is continuous and strictly increasing, and thus has the continuous inverse function {\displaystyle \psi } defined on {\displaystyle [-1,\infty )}. Define {\displaystyle G^{v}=G_{\psi (v)}}.
{\displaystyle G^{v}} is then called the v-th ramification group in upper numbering. In other words, {\displaystyle G^{\phi (u)}=G_{u}}. Note {\displaystyle G^{-1}=G,G^{0}=G_{0}}. The upper numbering is defined so as to be compatible with passage to quotients: if {\displaystyle H} is normal in {\displaystyle G}, then {\displaystyle (G/H)^{v}=G^{v}H/H} for all {\displaystyle v} (whereas lower numbering is compatible with passage to subgroups).
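The function φ can be tabulated for the quartic example above, where |G_0| = |G_1| = |G_2| = 4, |G_3| = |G_4| = 2, and G_i is trivial for i ≥ 5 (a sketch of ours; the helper names are arbitrary):

```python
from fractions import Fraction

# phi(u) at integer u: since G_t = G_i for the least integer i >= t,
# phi(u) = sum_{i=1..u} 1/(G_0 : G_i) = sum_{i=1..u} |G_i| / |G_0|.
def phi(orders, u):
    """orders[i] = |G_i| (constant past the end of the list)."""
    g0 = orders[0]
    return sum(Fraction(orders[min(i, len(orders) - 1)], g0)
               for i in range(1, u + 1))

orders = [4, 4, 4, 2, 2, 1]  # quartic example over Q_2
print([str(phi(orders, u)) for u in range(6)])
# -> ['0', '1', '2', '5/2', '3', '13/4']
# The filtration jumps at u = 2 and u = 4, where phi takes the integer
# values 2 and 3, as the Hasse-Arf theorem predicts for this abelian group.
```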
=== Herbrand's theorem ===
Herbrand's theorem states that the ramification groups in the lower numbering satisfy {\displaystyle G_{u}H/H=(G/H)_{v}} (for {\displaystyle v=\phi _{L/F}(u)}, where {\displaystyle L/F} is the subextension corresponding to {\displaystyle H}), and that the ramification groups in the upper numbering satisfy {\displaystyle G^{u}H/H=(G/H)^{u}}. This allows one to define ramification groups in the upper numbering for infinite Galois extensions (such as the absolute Galois group of a local field) from the inverse system of ramification groups for finite subextensions.
The upper numbering for an abelian extension is important because of the Hasse–Arf theorem. It states that if {\displaystyle G} is abelian, then the jumps in the filtration {\displaystyle G^{v}} are integers; i.e., {\displaystyle G_{i}=G_{i+1}} whenever {\displaystyle \phi (i)} is not an integer.
The upper numbering is compatible with the filtration of the norm residue group by the unit groups under the Artin isomorphism. The image of {\displaystyle G^{n}(L/K)} under the isomorphism
{\displaystyle G(L/K)^{\mathrm {ab} }\leftrightarrow K^{*}/N_{L/K}(L^{*})}
is just
{\displaystyle U_{K}^{n}/(U_{K}^{n}\cap N_{L/K}(L^{*}))\ .}
== See also ==
Finite extensions of local fields
== Notes ==
== References ==
B. Conrad, Math 248A. Higher ramification groups
Fröhlich, A.; Taylor, M.J. (1991). Algebraic number theory. Cambridge studies in advanced mathematics. Vol. 27. Cambridge University Press. ISBN 0-521-36664-X. Zbl 0744.11001.
Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
Serre, Jean-Pierre (1967). "VI. Local class field theory". In Cassels, J.W.S.; Fröhlich, A. (eds.). Algebraic number theory. Proceedings of an instructional conference organized by the London Mathematical Society (a NATO Advanced Study Institute) with the support of the International Mathematical Union. London: Academic Press. pp. 128–161. Zbl 0153.07403.
Serre, Jean-Pierre (1979). Local Fields. Graduate Texts in Mathematics. Vol. 67. Translated by Greenberg, Marvin Jay. Berlin, New York: Springer-Verlag. ISBN 0-387-90424-7. MR 0554237. Zbl 0423.12016.
Snaith, Victor P. (1994). Galois module structure. Fields Institute monographs. Providence, RI: American Mathematical Society. ISBN 0-8218-0264-X. Zbl 0830.11042. | Wikipedia/Ramification_theory_of_valuations |
In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions R on a space X concentrates on a formal neighborhood of a point of X: heuristically, this is a neighborhood so small that all Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in the case when R has a metric given by a non-Archimedean absolute value.
== General construction ==
Suppose that E is an abelian group with a descending filtration

E = F^0 E \supset F^1 E \supset F^2 E \supset \cdots
of subgroups. One then defines the completion (with respect to the filtration) as the inverse limit:
\widehat{E} = \varprojlim\,(E/F^n E) = \left\{ (\overline{a_n})_{n \ge 0} \in \prod_{n \ge 0} (E/F^n E) \;\middle|\; a_i \equiv a_j \pmod{F^i E} \text{ for all } i \le j \right\}.
This is again an abelian group. Usually E is an additive abelian group. If E has additional algebraic structure compatible with the filtration, for instance E is a filtered ring, a filtered module, or a filtered vector space, then its completion is again an object with the same structure that is complete in the topology determined by the filtration. This construction may be applied both to commutative and noncommutative rings. As may be expected, when the intersection of the
F^i E
equals zero, this produces a complete topological ring.
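As a concrete illustration (not part of the article; the helper names are made up), the compatibility condition defining the inverse limit can be checked mechanically for E = Z with the filtration F^n E = 2^n Z:

```python
# A finite-stage sketch of the inverse-limit construction for E = Z with the
# filtration F^n E = 2^n Z.  An element of the completion is a sequence of
# residues (a_n mod 2^n) that are compatible: a_i ≡ a_j (mod 2^i) for i <= j.

N = 8  # number of stages to inspect (the true completion uses all n >= 0)

def stages(x, n_stages=N, p=2):
    """Truncations of the integer x in the quotients E / F^n E = Z / p^n Z."""
    return [x % p**n for n in range(n_stages)]

def is_compatible(seq, p=2):
    """Check the inverse-limit condition a_i ≡ a_j (mod p^i) for all i <= j."""
    return all(seq[i] % p**i == seq[j] % p**i
               for i in range(len(seq)) for j in range(i, len(seq)))

# Any ordinary integer gives a compatible sequence: the image of E in E-hat.
assert is_compatible(stages(1234))

# An arbitrary sequence of residues is usually not compatible, so it does not
# define an element of the completion.
assert not is_compatible([0, 1, 2, 7, 11, 5, 3, 1])
```

The point of the check is that the completion consists of exactly those residue sequences passing `is_compatible`, not of arbitrary tuples in the product.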
== Krull topology ==
In commutative algebra, the filtration on a commutative ring R by the powers of a proper ideal I determines the Krull (after Wolfgang Krull) or I-adic topology on R. The case of a maximal ideal
I = \mathfrak{m} is especially important, for example the distinguished maximal ideal of a valuation ring. The basis of open neighbourhoods of 0 in R is given by the powers I^n, which are nested and form a descending filtration on R:
F^0 R = R \supset I \supset I^2 \supset \cdots, \quad F^n R = I^n.
(Open neighborhoods of any r ∈ R are given by cosets r + I^n.) The (I-adic) completion is the inverse limit of the factor rings,
\widehat{R}_I = \varprojlim\,(R/I^n)
pronounced "R I hat". The kernel of the canonical map π from the ring to its completion is the intersection of the powers of I. Thus π is injective if and only if this intersection reduces to the zero element of the ring; by the Krull intersection theorem, this is the case for any commutative Noetherian ring which is an integral domain or a local ring.
There is a related topology on R-modules, also called Krull or I-adic topology. A basis of open neighborhoods of a module M is given by the sets of the form
x + I^n M \quad \text{for } x \in M.
The I-adic completion of an R-module M is the inverse limit of the quotients
\widehat{M}_I = \varprojlim\,(M/I^n M).
This procedure converts any module over R into a complete topological module over
\widehat{R}_I
if I is finitely generated.
== Examples ==
The ring of p-adic integers
\mathbb{Z}_p is obtained by completing the ring \mathbb{Z}
of integers at the ideal (p).
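To make the p-adic example tangible (an illustrative sketch, not from the article), one can compute the compatible residues of an integer at each stage Z/p^n. The integer -1, which has no finite base-p expansion, becomes the p-adic number whose digits are all p - 1:

```python
# Sketch: an element of Z_p is determined by its compatible residues mod p^n.
# The integer -1 maps to p^n - 1 at every stage, so every one of its p-adic
# digits equals p - 1.

p = 5

def truncation(x, n):
    """Image of x in Z/p^n, the n-th stage of the inverse limit."""
    return x % p**n

def digits(x, n):
    """First n base-p digits of the truncation of x mod p^n."""
    t = truncation(x, n)
    out = []
    for _ in range(n):
        out.append(t % p)
        t //= p
    return out

# All 5-adic digits of -1 are 4:
assert digits(-1, 6) == [4, 4, 4, 4, 4, 4]
# and adding 1 gives 0 at every stage, as it must:
assert all((truncation(-1, n) + 1) % p**n == 0 for n in range(1, 10))
```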
Let R = K[x1,...,xn] be the polynomial ring in n variables over a field K and
\mathfrak{m} = (x_1, \ldots, x_n)
be the maximal ideal generated by the variables. Then the completion
\widehat{R}_{\mathfrak{m}}
is the ring K[[x1,...,xn]] of formal power series in n variables over K.
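Computationally, elements of a power-series completion can be handled one truncation at a time, i.e. as polynomials mod x^n. The sketch below (illustrative only; the helper name is made up) shows that 1 - x, which is not invertible in K[x], becomes invertible in the completion, with inverse the geometric series:

```python
# Sketch: work in K[[x]] stage by stage, representing a series by its
# coefficient list mod x^n.  (1 - x) * (1 + x + x^2 + ...) ≡ 1 (mod x^n).

def mul_mod_xn(f, g, n):
    """Multiply two coefficient lists (f[i] = coefficient of x^i) mod x^n."""
    h = [0] * n
    for i, a in enumerate(f[:n]):
        for j, b in enumerate(g[:n]):
            if i + j < n:
                h[i + j] += a * b
    return h

n = 10
one_minus_x = [1, -1] + [0] * (n - 2)
geometric = [1] * n  # truncation of 1 + x + x^2 + ... mod x^n

product = mul_mod_xn(one_minus_x, geometric, n)
assert product == [1] + [0] * (n - 1)
```

This mirrors the inverse-limit picture: an element of K[[x]] is exactly a compatible family of truncations mod x^n.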
Given a Noetherian ring R and an ideal I = (f_1, \ldots, f_n), the I-adic completion of R is an image of a formal power series ring; specifically, it is the image of the surjection

R[[x_1, \ldots, x_n]] \to \widehat{R}_I, \qquad x_i \mapsto f_i.
The kernel is the ideal
(x_1 - f_1, \ldots, x_n - f_n).
Completions can also be used to analyze the local structure of singularities of a scheme. For example, the affine schemes associated to \mathbb{C}[x,y]/(xy) and the nodal cubic plane curve \mathbb{C}[x,y]/(y^2 - x^2(1+x)) have similar-looking singularities at the origin when viewing their graphs (both look like a plus sign). Notice that in the second case, any Zariski neighborhood of the origin is still an irreducible curve. If we use completions, then we are looking at a "small enough" neighborhood where the node has two components. Taking the localizations of these rings along the ideal (x, y) and completing gives

\mathbb{C}[[x,y]]/(xy) \quad \text{and} \quad \mathbb{C}[[x,y]]/((y+u)(y-u))

respectively, where u is the formal square root of x^2(1+x) in \mathbb{C}[[x,y]].
More explicitly, the power series is

u = x\sqrt{1+x} = \sum_{n=0}^{\infty} \frac{(-1)^n (2n)!}{(1-2n)(n!)^2\, 4^n}\, x^{n+1}.
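The identity u^2 = x^2(1 + x) can be verified symbolically from the coefficient formula by squaring the truncated series with exact rational arithmetic (a verification sketch, not part of the article):

```python
# Sketch: check that the series u with coefficients
#   (-1)^n (2n)! / ((1-2n) (n!)^2 4^n)  on  x^(n+1)
# squares to x^2 (1 + x) as a formal power series, truncated mod x^N.
from fractions import Fraction
from math import factorial

N = 12  # truncation order

def u_coeff(n):
    """Coefficient of x^(n+1) in u, from the closed-form sum."""
    num = Fraction((-1) ** n * factorial(2 * n))
    den = Fraction((1 - 2 * n) * factorial(n) ** 2 * 4 ** n)
    return num / den

u = [Fraction(0)] * N
for n in range(N - 1):
    u[n + 1] = u_coeff(n)

# Square u mod x^N.
u2 = [Fraction(0)] * N
for i in range(N):
    for j in range(N - i):
        u2[i + j] += u[i] * u[j]

# x^2 (1 + x) = x^2 + x^3.
target = [Fraction(0)] * N
target[2], target[3] = Fraction(1), Fraction(1)

assert u2 == target  # u really is a formal square root of x^2(1 + x)
```

Since u has no constant term, the dropped tail of the truncation cannot affect coefficients below x^N, so the check mod x^N is exact.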
Since both rings are given by the intersection of two ideals generated by a homogeneous degree 1 polynomial, we can see algebraically that the singularities "look" the same. This is because such a scheme is the union of two non-equal linear subspaces of the affine plane.
== Properties ==
The completion of a Noetherian ring with respect to some ideal is a Noetherian ring.
The completion of a Noetherian local ring with respect to the unique maximal ideal is a Noetherian local ring.
The completion is a functorial operation: a continuous map f: R → S of topological rings gives rise to a map of their completions,
\widehat{f} : \widehat{R} \to \widehat{S}.
Moreover, if M and N are two modules over the same topological ring R and f: M → N is a continuous module map then f uniquely extends to the map of the completions:
\widehat{f} : \widehat{M} \to \widehat{N},
where \widehat{M} and \widehat{N} are modules over \widehat{R}.
The completion of a Noetherian ring R is a flat module over R.
The completion of a finitely generated module M over a Noetherian ring R can be obtained by extension of scalars:
\widehat{M} = M \otimes_R \widehat{R}.
Together with the previous property, this implies that the functor of completion on finitely generated R-modules is exact: it preserves short exact sequences. In particular, taking quotients of rings commutes with completion, meaning that for any quotient R-algebra
R/I, there is an isomorphism

\widehat{R/I} \cong \widehat{R}/\widehat{I}.
Cohen structure theorem (equicharacteristic case). Let R be a complete local Noetherian commutative ring with maximal ideal
\mathfrak{m}
and residue field K. If R contains a field, then
R \simeq K[[x_1, \ldots, x_n]]/I
for some n and some ideal I (Eisenbud, Theorem 7.7).
== See also ==
Formal scheme
Profinite integer
Locally compact field
Zariski ring
Linear topology
Quasi-unmixed ring
In mathematics, the restriction of a function f is a new function, denoted f|_A or f{\upharpoonright}_A, obtained by choosing a smaller domain A for the original function f. The function f is then said to extend f|_A.
== Formal definition ==
Let f : E \to F be a function from a set E to a set F. If a set A is a subset of E, then the restriction of f to A is the function

f|_A : A \to F

given by f|_A(x) = f(x) for x \in A.
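For a function with finite domain, the definition can be sketched directly by modelling the function as its graph (a Python dict; the helper name here is made up for illustration):

```python
# Sketch: with a function modelled as a dict (a finite graph {x: f(x)}),
# restriction to a subset A just keeps the pairs whose first component is in A.

def restrict(f, A):
    """Restriction f|_A of a dict-based function f to the subset A of its domain."""
    return {x: y for x, y in f.items() if x in A}

f = {1: "a", 2: "b", 3: "c", 4: "d"}   # a function E -> F with E = {1, 2, 3, 4}
A = {2, 4}

f_A = restrict(f, A)
assert f_A == {2: "b", 4: "d"}
assert all(f_A[x] == f[x] for x in A)  # f|_A(x) = f(x) for x in A
```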
Informally, the restriction of f to A is the same function as f, but is only defined on A.
If the function f is thought of as a relation (x, f(x)) on the Cartesian product E \times F, then the restriction of f to A can be represented by its graph,

G(f|_A) = \{(x, f(x)) \in G(f) : x \in A\} = G(f) \cap (A \times F),

where the pairs (x, f(x)) represent ordered pairs in the graph G.
=== Extensions ===
A function F is said to be an extension of another function f if whenever x is in the domain of f, then x is also in the domain of F and f(x) = F(x). That is, if \operatorname{domain} f \subseteq \operatorname{domain} F and F|_{\operatorname{domain} f} = f.
A linear extension (respectively, continuous extension, etc.) of a function f is an extension of f that is also a linear map (respectively, a continuous map, etc.).
== Examples ==
The restriction of the non-injective function f : \mathbb{R} \to \mathbb{R}, x \mapsto x^2 to the domain \mathbb{R}_+ = [0, \infty) is the injection f : \mathbb{R}_+ \to \mathbb{R}, x \mapsto x^2.
The factorial function is the restriction of the gamma function to the positive integers, with the argument shifted by one:
\Gamma|_{\mathbb{Z}^+}(n) = (n-1)!
== Properties of restrictions ==
Restricting a function f : X \to Y to its entire domain X gives back the original function, that is, f|_X = f.
Restricting a function twice is the same as restricting it once, that is, if A \subseteq B \subseteq \operatorname{dom} f, then (f|_B)|_A = f|_A.
The restriction of the identity function on a set X to a subset A of X is just the inclusion map from A into X.
The restriction of a continuous function is continuous.
== Applications ==
=== Inverse functions ===
For a function to have an inverse, it must be one-to-one. If a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function f(x) = x^2 defined on the whole of \mathbb{R} is not one-to-one, since x^2 = (-x)^2 for any x \in \mathbb{R}. However, the function becomes one-to-one if we restrict to the domain \mathbb{R}_{\ge 0} = [0, \infty), in which case f^{-1}(y) = \sqrt{y}. (If we instead restrict to the domain (-\infty, 0], then the inverse is the negative of the square root of y.) Alternatively, there is no need to restrict the domain if we allow the inverse to be a multivalued function.
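The partial-inverse idea can be sketched in code (an illustrative example with made-up function names, not a library API): restricting x ↦ x² to nonnegative inputs makes it injective, so the nonnegative square root inverts it there.

```python
# Sketch: x -> x**2 restricted to [0, inf) is injective, so it has a partial
# inverse (the nonnegative square root) on its image.
import math

def f(x):
    return x * x

def f_restricted(x):
    """f with domain restricted to [0, inf); rejects the rest of R."""
    if x < 0:
        raise ValueError("outside the restricted domain [0, inf)")
    return f(x)

def f_inverse(y):
    """Partial inverse of the restricted f: the nonnegative square root."""
    return math.sqrt(y)

# On the restricted domain the inverse really inverts f:
for x in [0.0, 1.5, 2.0, 10.0]:
    assert math.isclose(f_inverse(f_restricted(x)), x)

# The unrestricted f is not injective, so no single-valued inverse exists on R:
assert f(-2.0) == f(2.0)
```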
=== Selection operators ===
In relational algebra, a selection (sometimes called a restriction to avoid confusion with SQL's use of SELECT) is a unary operation written as \sigma_{a\theta b}(R) or \sigma_{a\theta v}(R), where:

a and b are attribute names,
\theta is a binary operation in the set \{<, \le, =, \ne, \ge, >\},
v is a value constant, and
R is a relation.
The selection \sigma_{a\theta b}(R) selects all those tuples in R for which \theta holds between the a attribute and the b attribute. The selection \sigma_{a\theta v}(R) selects all those tuples in R for which \theta holds between the a attribute and the value v.
Thus, the selection operator restricts to a subset of the entire database.
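Both forms of selection can be sketched on a relation modelled as a list of dicts (the function names and sample data are invented for illustration):

```python
# Sketch of the two selection operators on a relation modelled as a list of
# dicts (tuples keyed by attribute name); theta is passed as a predicate.
import operator

def select_attr_value(R, a, theta, v):
    """sigma_{a theta v}(R): tuples whose attribute a satisfies theta w.r.t. v."""
    return [t for t in R if theta(t[a], v)]

def select_attr_attr(R, a, theta, b):
    """sigma_{a theta b}(R): tuples whose attributes a and b satisfy theta."""
    return [t for t in R if theta(t[a], t[b])]

employees = [
    {"name": "ada",  "salary": 90, "bonus": 10},
    {"name": "bob",  "salary": 50, "bonus": 60},
    {"name": "cleo", "salary": 70, "bonus": 70},
]

# sigma_{salary > 60}(employees)
high = select_attr_value(employees, "salary", operator.gt, 60)
assert [t["name"] for t in high] == ["ada", "cleo"]

# sigma_{bonus >= salary}(employees)
odd = select_attr_attr(employees, "bonus", operator.ge, "salary")
assert [t["name"] for t in odd] == ["bob", "cleo"]
```

Note that selection always returns a subset of the input tuples, which is exactly the sense in which it is a restriction.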
=== The pasting lemma ===
The pasting lemma is a result in topology that relates the continuity of a function with the continuity of its restrictions to subsets.
Let X, Y be two closed subsets (or two open subsets) of a topological space A such that A = X \cup Y, and let B also be a topological space. If f : A \to B is continuous when restricted to both X and Y, then f is continuous.
This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and create a new one.
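The combinatorial core of pasting, that two pieces agreeing on their overlap determine one function on the union, can be sketched for finite domains (an illustration only; it does not model the topological continuity hypotheses):

```python
# Sketch: glue two dict-based functions that agree on the overlap of their
# domains into one function on the union, in the spirit of the pasting lemma.

def glue(f, g):
    """Combine f and g into one function on the union of their domains,
    provided they agree wherever both are defined."""
    overlap = f.keys() & g.keys()
    if any(f[x] != g[x] for x in overlap):
        raise ValueError("pieces disagree on the overlap; cannot glue")
    return {**f, **g}

f = {0: "lo", 1: "lo", 2: "mid"}   # defined on X = {0, 1, 2}
g = {2: "mid", 3: "hi", 4: "hi"}   # defined on Y = {2, 3, 4}, agrees at 2

h = glue(f, g)
assert h == {0: "lo", 1: "lo", 2: "mid", 3: "hi", 4: "hi"}
assert all(h[x] == f[x] for x in f)  # h restricted to X is f
assert all(h[x] == g[x] for x in g)  # h restricted to Y is g
```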
=== Sheaves ===
Sheaves provide a way of generalizing restrictions to objects besides functions.
In sheaf theory, one assigns an object F(U) in a category to each open set U of a topological space, and requires that the objects satisfy certain conditions. The most important condition is that there are restriction morphisms between every pair of objects associated to nested open sets; that is, if V \subseteq U, then there is a morphism \operatorname{res}_{V,U} : F(U) \to F(V) satisfying the following properties, which are designed to mimic the restriction of a function:
For every open set U of X, the restriction morphism \operatorname{res}_{U,U} : F(U) \to F(U) is the identity morphism on F(U).
If we have three open sets W \subseteq V \subseteq U, then the composite \operatorname{res}_{W,V} \circ \operatorname{res}_{V,U} = \operatorname{res}_{W,U}.
(Locality) If (U_i) is an open covering of an open set U, and if s, t \in F(U) are such that s|_{U_i} = t|_{U_i} for each set U_i of the covering, then s = t; and
(Gluing) If (U_i) is an open covering of an open set U, and if for each i a section s_i \in F(U_i) is given such that for each pair U_i, U_j of the covering sets the restrictions of s_i and s_j agree on the overlaps, s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j}, then there is a section s \in F(U) such that s|_{U_i} = s_i for each i.
The collection of all such objects is called a sheaf. If only the first two properties are satisfied, it is a pre-sheaf.
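For the simplest example, the sheaf of all functions on a space, the restriction morphisms are ordinary function restrictions, and the axioms can be checked directly on a tiny cover (an illustrative sketch on finite sets; the helper names are made up):

```python
# Sketch: for the (pre)sheaf of all functions U -> values on a finite set,
# restriction morphisms are ordinary restrictions, and identity, composition
# and gluing can be checked on a small open cover.

def res(section, V):
    """Restriction morphism res_{V,U}: restrict the dict-based section to V."""
    return {x: y for x, y in section.items() if x in V}

U = {1, 2, 3, 4}
U1, U2 = {1, 2, 3}, {3, 4}          # a cover of U

s1 = {1: "a", 2: "b", 3: "c"}       # section over U1
s2 = {3: "c", 4: "d"}               # section over U2

# The sections agree on the overlap U1 ∩ U2 = {3} ...
assert res(s1, U1 & U2) == res(s2, U1 & U2)

# ... so they glue to a section over U restricting back to each piece:
s = {**s1, **s2}
assert res(s, U1) == s1 and res(s, U2) == s2

# Identity and composition properties of the restriction morphisms:
assert res(s, U) == s                           # res_{U,U} = id
assert res(res(s, U1), {3}) == res(s, {3})      # res_{W,V} ∘ res_{V,U} = res_{W,U}
```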
== Left- and right-restriction ==
More generally, the restriction (or domain restriction or left-restriction) A \triangleleft R of a binary relation R between E and F may be defined as a relation having domain A, codomain F and graph

G(A \triangleleft R) = \{(x, y) \in G(R) : x \in A\}.
Similarly, one can define a right-restriction or range restriction R \triangleright B. Indeed, one could define a restriction to n-ary relations, as well as to subsets understood as relations, such as ones of the Cartesian product E \times F for binary relations. These cases do not fit into the scheme of sheaves.
== Anti-restriction ==
The domain anti-restriction (or domain subtraction) of a function or binary relation R (with domain E and codomain F) by a set A may be defined as (E \setminus A) \triangleleft R; it removes all elements of A from the domain E. It is sometimes denoted A ⩤ R.
Similarly, the range anti-restriction (or range subtraction) of a function or binary relation R by a set B is defined as R \triangleright (F \setminus B); it removes all elements of B from the codomain F. It is sometimes denoted R ⩥ B.
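All four operators, restriction and anti-restriction on each side, are one-liners when a relation is modelled as a set of pairs (an illustrative sketch with invented names, not a standard library API):

```python
# Sketch: the four operators on a binary relation modelled as a set of pairs.

def domain_restrict(A, R):        # A ◁ R
    return {(x, y) for (x, y) in R if x in A}

def range_restrict(R, B):         # R ▷ B
    return {(x, y) for (x, y) in R if y in B}

def domain_subtract(A, R, E):     # domain anti-restriction: (E \ A) ◁ R
    return domain_restrict(E - A, R)

def range_subtract(R, B, F):      # range anti-restriction: R ▷ (F \ B)
    return range_restrict(R, F - B)

E, F = {1, 2, 3}, {"x", "y"}
R = {(1, "x"), (2, "y"), (3, "x")}

assert domain_restrict({1, 2}, R) == {(1, "x"), (2, "y")}
assert range_restrict(R, {"x"}) == {(1, "x"), (3, "x")}
assert domain_subtract({1, 2}, R, E) == {(3, "x")}
assert range_subtract(R, {"x"}, F) == {(2, "y")}
```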
== See also ==
Constraint – Condition of an optimization problem which the solution must satisfy
Deformation retract – Continuous, position-preserving mapping from a topological space into a subspace
Local property – Property which occurs on sufficiently small or arbitrarily small neighborhoods of points
Function (mathematics) § Restriction and extension
Binary relation § Restriction
Relational algebra § Selection (σ)
Argumentation theory is the interdisciplinary study of how conclusions can be supported or undermined by premises through logical reasoning. With historical origins in logic, dialectic, and rhetoric, argumentation theory includes the arts and sciences of civil debate, dialogue, conversation, and persuasion. It studies rules of inference, logic, and procedural rules in both artificial and real-world settings.
Argumentation includes various forms of dialogue such as deliberation and negotiation which are concerned with collaborative decision-making procedures. It also encompasses eristic dialogue, the branch of social debate in which victory over an opponent is the primary goal, and didactic dialogue used for teaching. This discipline also studies the means by which people can express and rationally resolve or at least manage their disagreements.
Argumentation is a daily occurrence in settings such as public debate, science, and law. In law, for example, the judge, the parties, and the prosecutor all argue in court when presenting and testing the validity of evidence. Argumentation scholars also study the post hoc rationalizations by which organizational actors try to justify decisions they have made irrationally.
Argumentation is one of four rhetorical modes (also known as modes of discourse), along with exposition, description, and narration.
== Key components of argumentation ==
Some key components of argumentation are:
Understanding and identifying arguments, either explicit or implied, and the goals of the participants in the different types of dialogue.
Identifying the premises from which conclusions are derived.
Establishing the "burden of proof" – determining who made the initial claim and is thus responsible for providing evidence why their position merits acceptance.
For the one carrying the "burden of proof", the advocate, to marshal evidence for their position in order to convince or force the opponent's acceptance. The method by which this is accomplished is producing valid, sound, and cogent arguments, devoid of weaknesses, and not easily attacked.
In a debate, fulfillment of the burden of proof creates a burden of rejoinder. One must try to identify faulty reasoning in the opponent's argument, to attack the reasons/premises of the argument, to provide counterexamples if possible, to identify any fallacies, and to show why a valid conclusion cannot be derived from the reasons provided for their argument.
For example, consider the following exchange, illustrating the No true Scotsman fallacy:
Argument: "No Scotsman puts sugar on his porridge."
Reply: "But my friend Angus, who is a Scotsman, likes sugar with his porridge."
Rebuttal: "Well perhaps, but no true Scotsman puts sugar on his porridge."
In this dialogue, the proposer first offers a premise, the premise is challenged by the interlocutor, and so the proposer offers a modification of the premise, which is designed only to evade the challenge provided.
== Internal structure of arguments ==
Typically an argument has an internal structure, comprising the following:
a set of assumptions or premises,
a method of reasoning or deduction, and
a conclusion or point.
An argument has one or more premises and one conclusion.
Often classical logic is used as the method of reasoning so that the conclusion follows logically from the assumptions or support. One challenge is that if the set of assumptions is inconsistent then anything can follow logically from inconsistency. Therefore, it is common to insist that the set of assumptions be consistent. It is also good practice to require the set of assumptions to be the minimal set, with respect to set inclusion, necessary to infer the consequent. Such arguments are called MINCON arguments, short for minimal consistent. Such argumentation has been applied to the fields of law and medicine.
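The MINCON idea can be made concrete with a toy brute-force sketch for propositional logic (purely illustrative; not drawn from the law or medicine applications the article mentions, and all names are made up):

```python
# Sketch: find MINCON arguments -- minimal (by set inclusion) consistent
# subsets of the premises that entail the conclusion -- for propositional
# formulas, by enumerating truth assignments over a fixed variable set.
from itertools import combinations, product

VARS = ("p", "q", "r")

def assignments():
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def consistent(premises):
    """A premise set is consistent if some assignment satisfies all of it."""
    return any(all(f(a) for f in premises) for a in assignments())

def entails(premises, conclusion):
    """Premises entail the conclusion if every model of them satisfies it."""
    return all(conclusion(a) for a in assignments()
               if all(f(a) for f in premises))

def mincon(premises, conclusion):
    """Minimal consistent subsets of the premises entailing the conclusion."""
    found = []
    for k in range(len(premises) + 1):  # smaller subsets first
        for subset in combinations(premises, k):
            if (consistent(subset) and entails(subset, conclusion)
                    and not any(set(s) <= set(subset) for s in found)):
                found.append(subset)
    return found

p = lambda a: a["p"]
p_implies_q = lambda a: (not a["p"]) or a["q"]
r = lambda a: a["r"]                  # an irrelevant premise
q = lambda a: a["q"]                  # the conclusion

result = mincon([p, p_implies_q, r], q)
# Exactly one minimal consistent argument for q: {p, p -> q}; r is not needed.
assert result == [(p, p_implies_q)]
```

Enumerating assignments is exponential in the number of variables, so this is only a demonstration of the definition, not a practical procedure.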
A non-classical approach to argumentation investigates abstract arguments, where 'argument' is considered a primitive term, so no internal structure of arguments is taken into account.
== Types of dialogue ==
In its most common form, argumentation involves an individual and an interlocutor or opponent engaged in dialogue, each contending differing positions and trying to persuade each other, but there are various types of dialogue:
Persuasion dialogue aims to resolve conflicting points of view of different positions.
Negotiation aims to resolve conflicts of interests by cooperation and dealmaking.
Inquiry aims to resolve general ignorance by the growth of knowledge.
Deliberation aims to resolve a need to take action by reaching a decision.
Information seeking aims to reduce one party's ignorance by requesting information from another party that is in a position to know something.
Eristic aims to resolve a situation of antagonism through verbal fighting.
== Argumentation and the grounds of knowledge ==
Argumentation theory had its origins in foundationalism, a theory of knowledge (epistemology) in the field of philosophy. It sought to find the grounds for claims in the forms (logic) and materials (factual laws) of a universal system of knowledge. The dialectical method was made famous by Plato and his use of Socrates critically questioning various characters and historical figures. But argument scholars gradually rejected Aristotle's systematic philosophy and the idealism in Plato and Kant. They questioned and ultimately discarded the idea that argument premises take their soundness from formal philosophical systems. The field thus broadened.
One of the original contributors to this trend was the philosopher Chaïm Perelman, who together with Lucie Olbrechts-Tyteca introduced the French term la nouvelle rhetorique in 1958 to describe an approach to argument which is not reduced to application of formal rules of inference. Perelman's view of argumentation is much closer to a juridical one, in which rules for presenting evidence and rebuttals play an important role.
Karl R. Wallace's seminal essay, "The Substance of Rhetoric: Good Reasons" in the Quarterly Journal of Speech (1963) 44, led many scholars to study "marketplace argumentation" – the ordinary arguments of ordinary people. The seminal essay on marketplace argumentation is Ray Lynn Anderson's and C. David Mortensen's "Logic and Marketplace Argumentation" Quarterly Journal of Speech 53 (1967): 143–150. This line of thinking led to a natural alliance with late developments in the sociology of knowledge. Some scholars drew connections with recent developments in philosophy, namely the pragmatism of John Dewey and Richard Rorty. Rorty has called this shift in emphasis "the linguistic turn".
In this new hybrid approach argumentation is used with or without empirical evidence to establish convincing conclusions about issues which are moral, scientific, epistemic, or of a nature in which science alone cannot answer. Out of pragmatism and many intellectual developments in the humanities and social sciences, "non-philosophical" argumentation theories grew which located the formal and material grounds of arguments in particular intellectual fields. These theories include informal logic, social epistemology, ethnomethodology, speech acts, the sociology of knowledge, the sociology of science, and social psychology. These new theories are not non-logical or anti-logical. They find logical coherence in most communities of discourse. These theories are thus often labeled "sociological" in that they focus on the social grounds of knowledge.
== Kinds of argumentation ==
=== Conversational argumentation ===
The study of naturally occurring conversation arose from the field of sociolinguistics. It is usually called conversation analysis (CA). Inspired by ethnomethodology, it was developed in the late 1960s and early 1970s principally by the sociologist Harvey Sacks and, among others, his close associates Emanuel Schegloff and Gail Jefferson. Sacks died early in his career, but his work was championed by others in his field, and CA has now become an established force in sociology, anthropology, linguistics, speech-communication and psychology. It is particularly influential in interactional sociolinguistics, discourse analysis and discursive psychology, as well as being a coherent discipline in its own right. Recently CA techniques of sequential analysis have been employed by phoneticians to explore the fine phonetic details of speech.
Empirical studies and theoretical formulations by Sally Jackson and Scott Jacobs, and several generations of their students, have described argumentation as a form of managing conversational disagreement within communication contexts and systems that naturally prefer agreement.
=== Mathematical argumentation ===
The basis of mathematical truth has been the subject of long debate. Frege in particular sought to demonstrate (see Gottlob Frege, The Foundations of Arithmetic, 1884, and Begriffsschrift, 1879) that arithmetical truths can be derived from purely logical axioms and therefore are, in the end, logical truths. The project was developed by Russell and Whitehead in their Principia Mathematica. If an argument can be cast in the form of sentences in symbolic logic, then it can be tested by the application of accepted proof procedures. This was carried out for arithmetic using Peano axioms, and the foundation most commonly used for most modern mathematics is Zermelo-Fraenkel set theory, with or without the Axiom of Choice. Be that as it may, an argument in mathematics, as in any other discipline, can be considered valid only if it can be shown that it cannot have true premises and a false conclusion.
=== Scientific argumentation ===
Perhaps the most radical statement of the social grounds of scientific knowledge appears in Alan G. Gross's The Rhetoric of Science (Cambridge: Harvard University Press, 1990). Gross holds that science is rhetorical "without remainder", meaning that scientific knowledge itself cannot be seen as an idealized ground of knowledge. Scientific knowledge is produced rhetorically, meaning that it has special epistemic authority only insofar as its communal methods of verification are trustworthy. This thinking represents an almost complete rejection of the foundationalism on which argumentation was first based.
=== Interpretive argumentation ===
Interpretive argumentation is a dialogical process in which participants explore and/or resolve interpretations often of a text of any medium containing significant ambiguity in meaning.
Interpretive argumentation is pertinent to the humanities, hermeneutics, literary theory, linguistics, semantics, pragmatics, semiotics, analytic philosophy and aesthetics. Topics in conceptual interpretation include aesthetic, judicial, logical and religious interpretation. Topics in scientific interpretation include scientific modeling.
=== Legal argumentation ===
==== By lawyers ====
Legal arguments are spoken presentations to a judge or appellate court by a lawyer, or parties when representing themselves of the legal reasons why they should prevail. Oral argument at the appellate level accompanies written briefs, which also advance the argument of each party in the legal dispute. A closing argument, or summation, is the concluding statement of each party's counsel reiterating the important arguments for the trier of fact, often the jury, in a court case. A closing argument occurs after the presentation of evidence.
==== By judges ====
A judicial opinion or legal opinion is in certain jurisdictions a written explanation by a judge or group of judges that accompanies an order or ruling in a case, laying out the rationale (justification) and legal principles for the ruling. It cites the decision reached to resolve the dispute. A judicial opinion usually includes the reasons behind the decision. Where there are three or more judges, it may take the form of a majority opinion, minority opinion or a concurring opinion.
=== Political argumentation ===
Political arguments are used by academics, media pundits, candidates for political office and government officials. Political arguments are also used by citizens in ordinary interactions to comment about and understand political events. The rationality of the public is a major question in this line of research. Political scientist Samuel L. Popkin coined the expression "low information voters" to describe most voters who know very little about politics or the world in general.
In practice, a "low information voter" may not be aware of legislation that their representative has sponsored in Congress. A low-information voter may base their ballot box decision on a media sound-bite, or a flier received in the mail. It is possible for a media sound-bite or campaign flier to present a political position for the incumbent candidate that completely contradicts the legislative action taken in the Capitol on behalf of the constituents. It may only take a small percentage of the overall voting group who base their decision on the inaccurate information to form a voter bloc large enough to swing an overall election result. When this happens, the constituency at large may have been misled. Nevertheless, the election result is legal and confirmed. Savvy political consultants will take advantage of low-information voters and sway their votes with disinformation and fake news because it can be easier and sufficiently effective. Fact checkers have come about in recent years to help counter the effects of such campaign tactics.
== Psychological aspects ==
Psychology has long studied the non-logical aspects of argumentation. For example, studies have shown that simple repetition of an idea is often a more effective method of argumentation than appeals to reason. Propaganda often utilizes repetition. "Repeat a lie often enough and it becomes the truth" is a law of propaganda often attributed to the Nazi politician Joseph Goebbels. Nazi rhetoric has been studied extensively as, inter alia, a repetition campaign.
Empirical studies of communicator credibility and attractiveness, sometimes labeled charisma, have also been tied closely to empirically-occurring arguments. Such studies bring argumentation within the ambit of persuasion theory and practice.
Some psychologists such as William J. McGuire believe that the syllogism is the basic unit of human reasoning. They have produced a large body of empirical work around McGuire's famous title "A Syllogistic Analysis of Cognitive Relationships". A central line of this way of thinking is that logic is contaminated by psychological variables such as "wishful thinking", in which subjects confound the likelihood of predictions with the desirability of the predictions. People hear what they want to hear and see what they expect to see. If planners want something to happen they see it as likely to happen. If they hope something will not happen, they see it as unlikely to happen. Thus smokers think that they personally will avoid cancer, promiscuous people practice unsafe sex, and teenagers drive recklessly.
== Theories ==
=== Argument fields ===
Stephen Toulmin and Charles Arthur Willard have championed the idea of argument fields, the former drawing upon Ludwig Wittgenstein's notion of language games (Sprachspiel), the latter drawing from communication and argumentation theory, sociology, political science, and social epistemology. For Toulmin, the term "field" designates discourses within which arguments and factual claims are grounded. For Willard, the term "field" is interchangeable with "community", "audience", or "readership". Similarly, G. Thomas Goodnight has studied "spheres" of argument and sparked a large literature created by younger scholars responding to or using his ideas. The general tenor of these field theories is that the premises of arguments take their meaning from social communities.
=== Stephen E. Toulmin's contributions ===
One of the most influential theorists of argumentation was the philosopher and educator, Stephen Toulmin, who is known for creating the Toulmin model of argument. His book The Uses of Argument is regarded as a seminal contribution to argumentation theory.
==== Alternative to absolutism and relativism ====
Throughout many of his works, Toulmin pointed out that absolutism (represented by theoretical or analytic arguments) has limited practical value. Absolutism is derived from Plato's idealized formal logic, which advocates universal truth; accordingly, absolutists believe that moral issues can be resolved by adhering to a standard set of moral principles, regardless of context. By contrast, Toulmin contends that many of these so-called standard principles are irrelevant to real situations encountered by human beings in daily life.
To develop his contention, Toulmin introduced the concept of argument fields. In The Uses of Argument (1958), Toulmin claims that some aspects of arguments vary from field to field, and are hence called "field-dependent", while other aspects of argument are the same throughout all fields, and are hence called "field-invariant". The flaw of absolutism, Toulmin believes, lies in its unawareness of the field-dependent aspect of argument; absolutism assumes that all aspects of argument are field invariant.
In Human Understanding (1972), Toulmin suggests that anthropologists have been tempted to side with relativists because they have noticed the influence of cultural variations on rational arguments. In other words, the anthropologist or relativist overemphasizes the importance of the "field-dependent" aspect of arguments, and neglects or is unaware of the "field-invariant" elements. In order to provide solutions to the problems of absolutism and relativism, Toulmin attempts throughout his work to develop standards that are neither absolutist nor relativist for assessing the worth of ideas.
In Cosmopolis (1990), he traces philosophers' "quest for certainty" back to René Descartes and Thomas Hobbes, and lauds John Dewey, Wittgenstein, Martin Heidegger, and Richard Rorty for abandoning that tradition.
==== Toulmin model of argument ====
Arguing that absolutism lacks practical value, Toulmin aimed to develop a different type of argument, called practical arguments (also known as substantial arguments). In contrast to absolutists' theoretical arguments, Toulmin's practical argument is intended to focus on the justificatory function of argumentation, as opposed to the inferential function of theoretical arguments. Whereas theoretical arguments make inferences based on a set of principles to arrive at a claim, practical arguments first find a claim of interest, and then provide justification for it. Toulmin believed that reasoning is less an activity of inference, involving the discovering of new ideas, and more a process of testing and sifting already existing ideas—an act achievable through the process of justification.
Toulmin believed that for a good argument to succeed, it needs to provide good justification for a claim. This, he believed, will ensure it stands up to criticism and earns a favourable verdict. In The Uses of Argument (1958), Toulmin proposed a layout containing six interrelated components for analyzing arguments:
Claim (Conclusion)
A conclusion whose merit must be established. In argumentative essays, it may be called the thesis. For example, if a person tries to convince a listener that he is a British citizen, the claim would be "I am a British citizen" (1).
Ground (Fact, Evidence, Data)
A fact one appeals to as a foundation for the claim. For example, the person introduced in 1 can support his claim with the supporting data "I was born in Bermuda" (2).
Warrant
A statement authorizing movement from the ground to the claim. In order to move from the ground established in 2, "I was born in Bermuda", to the claim in 1, "I am a British citizen", the person must supply a warrant to bridge the gap between 1 and 2 with the statement "A man born in Bermuda will legally be a British citizen" (3).
Backing
Credentials designed to certify the statement expressed in the warrant; backing must be introduced when the warrant itself is not convincing enough to the readers or the listeners. For example, if the listener does not deem the warrant in 3 as credible, the speaker will supply the legal provisions: "I trained as a barrister in London, specialising in citizenship, so I know that a man born in Bermuda will legally be a British citizen".
Rebuttal (Reservation)
Statements recognizing the restrictions which may legitimately be applied to the claim. It is exemplified as follows: "A man born in Bermuda will legally be a British citizen, unless he has betrayed Britain and has become a spy for another country".
Qualifier
Words or phrases expressing the speaker's degree of force or certainty concerning the claim. Such words or phrases include "probably", "possible", "impossible", "certainly", "presumably", "as far as the evidence goes", and "necessarily". The claim "I am definitely a British citizen" has a greater degree of force than the claim "I am a British citizen, presumably". (See also: Defeasible reasoning.)
The first three elements, claim, ground, and warrant, are considered as the essential components of practical arguments, while the second triad, qualifier, backing, and rebuttal, may not be needed in some arguments.
When Toulmin first proposed it, this layout of argumentation was based on legal arguments and intended to be used to analyze the rationality of arguments typically found in the courtroom. Toulmin did not realize that this layout could be applicable to the field of rhetoric and communication until his works were introduced to rhetoricians by Wayne Brockriede and Douglas Ehninger. Their Decision by Debate (1963) streamlined Toulmin's terminology and broadly introduced his model to the field of debate. Only after Toulmin published Introduction to Reasoning (1979) were the rhetorical applications of this layout mentioned in his works.
One criticism of the Toulmin model is that it does not fully consider the use of questions in argumentation. The Toulmin model assumes that an argument starts with a fact or claim and ends with a conclusion, but ignores an argument's underlying questions. In the example "Harry was born in Bermuda, so Harry must be a British subject", the question "Is Harry a British subject?" is ignored, which also neglects to analyze why particular questions are asked and others are not. (See Issue mapping for an example of an argument-mapping method that emphasizes questions.)
Toulmin's argument model has inspired research on, for example, goal structuring notation (GSN), widely used for developing safety cases, and argument maps and associated software.
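The six-part layout above lends itself to a simple data structure. The following Python sketch is illustrative only (the class name and `render` method are inventions for this example, not part of Toulmin's work or of any argument-mapping tool); it assembles the Bermuda example from its components:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """The six components of Toulmin's layout of argument."""
    claim: str                       # conclusion whose merit must be established
    ground: str                      # fact appealed to as a foundation for the claim
    warrant: str                     # statement authorizing movement from ground to claim
    backing: Optional[str] = None    # credentials certifying the warrant
    rebuttal: Optional[str] = None   # restrictions that may apply to the claim
    qualifier: Optional[str] = None  # degree of force or certainty

    def render(self) -> str:
        # Assemble the components into a single English sentence.
        parts = [f"{self.ground}; since {self.warrant}"]
        if self.backing:
            parts.append(f"on account of {self.backing}")
        prefix = self.qualifier + ", " if self.qualifier else ""
        parts.append(f"therefore, {prefix}{self.claim}")
        if self.rebuttal:
            parts.append(f"unless {self.rebuttal}")
        return ", ".join(parts)

bermuda = ToulminArgument(
    claim="Harry is a British citizen",
    ground="Harry was born in Bermuda",
    warrant="a man born in Bermuda will legally be a British citizen",
    qualifier="presumably",
    rebuttal="he has betrayed Britain and become a spy for another country",
)
print(bermuda.render())
```

Note how the first triad (claim, ground, warrant) is mandatory while backing, rebuttal, and qualifier are optional, mirroring the distinction drawn above.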
==== Evolution of knowledge ====
In 1972, Toulmin published Human Understanding, in which he asserts that conceptual change is an evolutionary process. In this book, Toulmin attacks Thomas Kuhn's account of conceptual change in his seminal work The Structure of Scientific Revolutions (1962). Kuhn believed that conceptual change is a revolutionary process (as opposed to an evolutionary process), during which mutually exclusive paradigms compete to replace one another. Toulmin criticized the relativist elements in Kuhn's thesis, arguing that mutually exclusive paradigms provide no ground for comparison, and that Kuhn made the relativists' error of overemphasizing the "field variant" while ignoring the "field invariant" or commonality shared by all argumentation or scientific paradigms.
In contrast to Kuhn's revolutionary model, Toulmin proposed an evolutionary model of conceptual change comparable to Darwin's model of biological evolution. Toulmin states that conceptual change involves the process of innovation and selection. Innovation accounts for the appearance of conceptual variations, while selection accounts for the survival and perpetuation of the soundest conceptions. Innovation occurs when the professionals of a particular discipline come to view things differently from their predecessors; selection subjects the innovative concepts to a process of debate and inquiry in what Toulmin considers as a "forum of competitions". The soundest concepts will survive the forum of competition as replacements or revisions of the traditional conceptions.
From the absolutists' point of view, concepts are either valid or invalid regardless of contexts. From the relativists' perspective, one concept is neither better nor worse than a rival concept from a different cultural context. From Toulmin's perspective, the evaluation depends on a process of comparison, which determines whether or not one concept will improve explanatory power more than its rival concepts.
=== Pragma-dialectics ===
Scholars at the University of Amsterdam in the Netherlands have pioneered a rigorous modern version of dialectic under the name pragma-dialectics. The intuitive idea is to formulate clear-cut rules that, if followed, will yield reasonable discussion and sound conclusions. Frans H. van Eemeren, the late Rob Grootendorst, and many of their students and co-authors have produced a large body of work expounding this idea.
The dialectical conception of reasonableness is given by ten rules for critical discussion, all being instrumental for achieving a resolution of the difference of opinion (from Van Eemeren, Grootendorst, & Snoeck Henkemans, 2002, p. 182–183). The theory postulates this as an ideal model, and not something one expects to find as an empirical fact. The model can however serve as an important heuristic and critical tool for testing how reality approximates this ideal and point to where discourse goes wrong, that is, when the rules are violated. Any such violation will constitute a fallacy. Albeit not primarily focused on fallacies, pragma-dialectics provides a systematic approach to deal with them in a coherent way.
Van Eemeren and Grootendorst identified four stages of argumentative dialogue. These stages can be regarded as an argument protocol. In a somewhat loose interpretation, the stages are as follows:
Confrontation stage: Presentation of the difference of opinion, such as a debate question or a political disagreement.
Opening stage: Agreement on material and procedural starting points, the mutually acceptable common ground of facts and beliefs, and the rules to be followed during the discussion (such as, how evidence is to be presented, and determination of closing conditions).
Argumentation stage: Presentation of reasons for and against the standpoint(s) at issue, through application of logical and common-sense principles according to the agreed-upon rules.
Concluding stage: Determining whether the standpoint has withstood reasonable criticism, and whether accepting it is justified. This occurs when the termination conditions are met (among these could be, for example, a time limitation or the determination of an arbiter).
Van Eemeren and Grootendorst provide a detailed list of rules that must be applied at each stage of the protocol. Moreover, in the account of argumentation given by these authors, there are specified roles of protagonist and antagonist in the protocol which are determined by the conditions which set up the need for argument.
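Since the four stages form an ordered protocol, they can be modelled as a minimal state machine. The sketch below is a toy illustration (the class and method names are hypothetical, not from the pragma-dialectical literature); it only enforces that the stages occur in order and that a concluded discussion cannot be reopened:

```python
from enum import Enum, auto

class Stage(Enum):
    CONFRONTATION = auto()  # present the difference of opinion
    OPENING = auto()        # agree on starting points and discussion rules
    ARGUMENTATION = auto()  # exchange reasons for and against the standpoint
    CONCLUDING = auto()     # decide whether the standpoint stands

class CriticalDiscussion:
    """Enforces the pragma-dialectical stage order; stages cannot be skipped."""
    ORDER = [Stage.CONFRONTATION, Stage.OPENING,
             Stage.ARGUMENTATION, Stage.CONCLUDING]

    def __init__(self):
        self._idx = -1  # no stage entered yet

    @property
    def stage(self):
        return self.ORDER[self._idx] if self._idx >= 0 else None

    def advance(self) -> Stage:
        if self._idx + 1 >= len(self.ORDER):
            raise RuntimeError("discussion already concluded")
        self._idx += 1
        return self.stage

d = CriticalDiscussion()
assert d.advance() is Stage.CONFRONTATION
```

A fuller model would also attach the stage-specific rules and the protagonist/antagonist roles mentioned above to each state.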
=== Walton's logical argumentation method ===
Douglas N. Walton developed a distinctive philosophical theory of logical argumentation built around a set of practical methods to help a user identify, analyze and evaluate arguments in everyday conversational discourse and in more structured areas such as debate, law and scientific fields. There are four main components: argumentation schemes, dialogue structures, argument mapping tools, and formal argumentation systems. The method uses the notion of commitment in dialogue as the fundamental tool for the analysis and evaluation of argumentation rather than the notion of belief. Commitments are statements that the agent has expressed or formulated, and has pledged to carry out, or has publicly asserted. According to the commitment model, agents interact with each other in a dialogue in which each takes its turn to contribute speech acts. The dialogue framework uses critical questioning as a way of testing plausible explanations and finding weak points in an argument that raise doubt concerning the acceptability of the argument.
Walton's logical argumentation model took a view of proof and justification different from analytic philosophy's dominant epistemology, which was based on a justified true belief framework. In the logical argumentation approach, knowledge is seen as form of belief commitment firmly fixed by an argumentation procedure that tests the evidence on both sides, and uses standards of proof to determine whether a proposition qualifies as knowledge. In this evidence-based approach, knowledge must be seen as defeasible.
== Artificial intelligence ==
Efforts have been made within the field of artificial intelligence to perform and analyze argumentation with computers. Argumentation has been used to provide a proof-theoretic semantics for non-monotonic logic, starting with the influential work of Dung (1995). Computational argumentation systems have found particular application in domains where formal logic and classical decision theory are unable to capture the richness of reasoning, domains such as law and medicine. In Elements of Argumentation, Philippe Besnard and Anthony Hunter show how classical logic-based techniques can be used to capture key elements of practical argumentation.
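The core idea of Dung's (1995) abstract argumentation frameworks can be sketched in a few lines: a framework is a set of arguments plus a binary attack relation, and the grounded extension (the most sceptical set of collectively acceptable arguments) is the least fixed point of the characteristic function. The function name and representation below are illustrative, not any particular library's API:

```python
def grounded_extension(args, attacks):
    """Compute the grounded extension of a Dung-style abstract argumentation
    framework (args, attacks) by iterating the characteristic function:
    an argument is acceptable when every one of its attackers is itself
    attacked by the current set."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    extension = set()
    while True:
        attacked_by_ext = {y for (x, y) in attacks if x in extension}
        # Arguments all of whose attackers are counter-attacked by the extension;
        # unattacked arguments are trivially included on the first pass.
        defended = {a for a in args if attackers[a] <= attacked_by_ext}
        if defended == extension:
            return extension
        extension = defended

# A attacks B, B attacks C: A is unattacked, so A is in; A then defends C.
print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
```

For a mutual attack between two arguments with no other support, the grounded extension is empty, reflecting the sceptical character of this semantics.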
Within computer science, the ArgMAS workshop series (Argumentation in Multi-Agent Systems), the CMNA workshop series, and the COMMA Conference, are regular annual events attracting participants from every continent. The journal Argument & Computation is dedicated to exploring the intersection between argumentation and computer science. ArgMining is a workshop series dedicated specifically to the related argument mining task.
Data from the collaborative structured online argumentation platform Kialo has been used to train and to evaluate natural language processing AI systems such as, most commonly, BERT and its variants. This includes argument extraction, conclusion generation, argument form quality assessment, machine argumentative debate generation or participation, surfacing most relevant previously overlooked viewpoints or arguments, argumentative writing support (including sentence attackability scores), automatic real-time evaluation of how truthful or convincing a sentence is (similar to fact-checking), language model fine tuning (including for chatbots), argument impact prediction, argument classification and polarity prediction.
== See also ==
== References ==
== Further reading ==
=== Flagship journals ===
Argumentation
Argumentation in Context
Informal Logic
Argumentation and Advocacy (formerly Journal of the American Forensic Association)
Social Epistemology
Episteme: A Journal of Social Epistemology
Journal of Argument and Computation
An online integrated development environment, also known as a web IDE or cloud IDE, is an integrated development environment that can be accessed from a web browser. Online IDEs can be used without downloads or installation, instead operating fully within modern web browsers such as Firefox, Google Chrome or Microsoft Edge. Online IDEs can enable software development on low-powered devices that are normally unsuitable. An online IDE does not usually contain all of the same features as a traditional desktop IDE, only basic IDE features such as a source-code editor with syntax highlighting. Integrated version control and Read–Eval–Print Loop (REPL) may also be included.
== Notable examples ==
Coder
Cloud9 IDE
Codeanywhere
CodePen
Eclipse Che
Glitch
GitHub Codespaces
JSFiddle
Project IDX
Replit
SourceLair
StackBlitz
Visual Studio Code
JDoodle
== References ==
A game engine (game environment) is a specialized development environment for creating video games. The features one provides depend on the type and granularity of control allowed by the underlying framework. Some may provide diagrams, a windowing environment and debugging facilities. Users build the game with the game IDE, which may incorporate a game engine or call it externally. Game IDEs are typically specialized and tailored to work with one specific game engine.
This is not to be confused with game environment art, which is "the setting or location in which [a] game takes place." It is also distinct from domain-specific entertainment languages, where all that is needed is a text editor, and from integrated development environments, which are more general and may provide different sets of features.
There is also a distinction from visual programming languages, in that such languages are more general than game engines.
== Examples ==
Below are some game engines and frameworks which come with specialized IDEs.
3D Game Creation System
Adventure Game Studio
Blender Game Engine (discontinued)
Buildbox
Construct
Clickteam Fusion
CryEngine
FPS Creator
Game Core
Game Editor
GameMaker
Gamut from CMU (not Stanford)
Gamestudio
GDevelop
Godot
Goji Editor
GameSalad
Magic Work Station
PlayCanvas
Roblox
RPG Maker
SdlBasic
SharpLudus
Stencyl
The 3D Gamemaker
Unity
Unreal Engine
Virtual Play Table
VASSAL
== References ==
Model-driven engineering (MDE) is a software development methodology that focuses on creating and exploiting domain models, which are conceptual models of all the topics related to a specific problem. Hence, it highlights and aims at abstract representations of the knowledge and activities that govern a particular application domain, rather than the computing (i.e. algorithmic) concepts.
MDE is a subfield of a software design approach referred to as round-trip engineering. The scope of MDE is much wider than that of the Model-Driven Architecture.
== Overview ==
The MDE approach is meant to increase productivity by maximizing compatibility between systems (via reuse of standardized models), simplifying the process of design (via models of recurring design patterns in the application domain), and promoting communication between individuals and teams working on the system (via a standardization of the terminology and the best practices used in the application domain). For instance, in model-driven development, technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model.
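As a toy illustration of generating a technical artifact algorithmically from a domain model: the dictionary-based model format and the generator below are invented for this sketch and do not correspond to any particular MDE tool or standard.

```python
# A minimal domain model: one entity with typed attributes.
model = {
    "entity": "Customer",
    "attributes": [("name", "str"), ("email", "str"), ("balance", "float")],
}

def generate_class(m):
    """Model-to-text transformation: derive a Python class definition
    (one possible artifact) from the domain model."""
    lines = [f"class {m['entity']}:"]
    params = ", ".join(f"{n}: {t}" for n, t in m["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for n, _ in m["attributes"]:
        lines.append(f"        self.{n} = {n}")
    return "\n".join(lines)

print(generate_class(model))
```

In a real MDE toolchain the same model would typically drive several generators at once, producing source code, documentation, and test skeletons from a single authoritative description.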
A modeling paradigm for MDE is considered effective if its models make sense from the point of view of a user that is familiar with the domain, and if they can serve as a basis for implementing systems. The models are developed through extensive communication among product managers, designers, developers and users of the application domain. As the models approach completion, they enable the development of software and systems.
Some of the better known MDE initiatives are:
The Object Management Group (OMG) initiative Model-Driven Architecture (MDA) which is leveraged by several of their standards such as Meta-Object Facility, XMI, CWM, CORBA, Unified Modeling Language (to be more precise, the OMG currently promotes the use of a subset of UML called fUML together with its action language, ALF, for model-driven architecture; a former approach relied on Executable UML and OCL, instead), and QVT.
The Eclipse "eco-system" of programming and modelling tools represented in general terms by the (Eclipse Modeling Framework). This framework allows the creation of tools implementing the MDA standards of the OMG; but, it is also possible to use it to implement other modeling-related tools.
== History ==
The first tools to support MDE were the Computer-Aided Software Engineering (CASE) tools developed in the 1980s. Companies like Integrated Development Environments (IDE – StP), Higher Order Software (now Hamilton Technologies, Inc., HTI), Cadre Technologies, Bachman Information Systems, and Logic Works (BP-Win and ER-Win) were pioneers in the field.
The US government got involved in modeling definitions, creating the IDEF specifications. Several variations of the modeling notations (see Booch, Rumbaugh, Jacobson, Gane and Sarson, Harel, Shlaer and Mellor, and others) were eventually joined, creating the Unified Modeling Language (UML). Rational Rose, a product for UML implementation, was developed by Rational Corporation (Booch). Such automation yields higher levels of abstraction in software development. This abstraction promotes simpler models with a greater focus on the problem space. Combined with executable semantics, this elevates the total level of automation possible. The Object Management Group (OMG) has developed a set of standards called Model-Driven Architecture (MDA), building a foundation for this advanced architecture-focused approach.
== Advantages ==
According to Douglas C. Schmidt, model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.
== Tools ==
Notable software tools for model-driven engineering include:
== See also ==
Application lifecycle management (ALM)
Business Process Model and Notation (BPMN)
Business-driven development (BDD)
Domain-driven design (DDD)
Domain-specific language (DSL)
Domain-specific modeling (DSM)
Domain-specific multimodeling
Language-oriented programming (LOP)
List of Unified Modeling Language tools
Model transformation (e.g. using QVT)
Model-based testing (MBT)
Modeling Maturity Level (MML)
Model-based systems engineering (MBSE)
Service-oriented modeling Framework (SOMF)
Software factory (SF)
Story-driven modeling (SDM)
Open API, open source specification for description of models and operations for HTTP interoperation and REST APIs
== References ==
== Further reading ==
David S. Frankel, Model Driven Architecture: Applying MDA to Enterprise Computing, John Wiley & Sons, ISBN 0-471-31920-1
Marco Brambilla, Jordi Cabot, Manuel Wimmer, Model Driven Software Engineering in Practice, foreword by Richard Soley (OMG Chairman), Morgan & Claypool, USA, 2012, Synthesis Lectures on Software Engineering #1. 182 pages. ISBN 9781608458820 (paperback), ISBN 9781608458837 (ebook). https://www.mdse-book.com
da Silva, Alberto Rodrigues (2015). "Model-Driven Engineering: A Survey Supported by a Unified Conceptual Model". Computer Languages, Systems & Structures. 43 (43): 139–155. doi:10.1016/j.cl.2015.06.001.
== External links ==
Model-Driven Architecture: Vision, Standards And Emerging Technologies at omg.org
Dynamic systems development method (DSDM) is an agile project delivery framework, initially used as a software development method. First released in 1994, DSDM originally sought to provide some discipline to the rapid application development (RAD) method. In later versions the DSDM Agile Project Framework was revised and became a generic approach to project management and solution delivery rather than being focused specifically on software development and code creation and could be used for non-IT projects. The DSDM Agile Project Framework covers a wide range of activities across the whole project lifecycle and includes strong foundations and governance, which set it apart from some other Agile methods. The DSDM Agile Project Framework is an iterative and incremental approach that embraces principles of Agile development, including continuous user/customer involvement.
DSDM fixes cost, quality and time at the outset and uses the MoSCoW prioritisation of scope into musts, shoulds, coulds and will not haves to adjust the project deliverable to meet the stated time constraint. DSDM is one of a number of agile methods for developing software and non-IT solutions, and it forms a part of the Agile Alliance.
In 2014, DSDM released the latest version of the method in the 'DSDM Agile Project Framework'. At the same time the new DSDM manual recognised the need to operate alongside other frameworks for service delivery (esp. ITIL) PRINCE2, Managing Successful Programmes, and PMI. The previous version (DSDM 4.2) had only contained guidance on how to use DSDM with extreme programming.
== History ==
In the early 1990s, rapid application development (RAD) was spreading across the IT industry. The user interfaces for software applications were moving from the old green screens to the graphical user interfaces that are used today. New application development tools were coming on the market, such as PowerBuilder. These enabled developers to share their proposed solutions much more easily with their customers – prototyping became a reality and the frustrations of the classical, sequential (waterfall) development methods could be put to one side.
However, the RAD movement was very unstructured: there was no commonly agreed definition of a suitable process and many organizations came up with their own definition and approach. Many major corporations were very interested in the possibilities, but they were also concerned about the loss of quality in end deliverables that free-flow development could give rise to.
The DSDM Consortium was founded in 1994 by an association of vendors and experts in the field of software engineering and was created with the objective of "jointly developing and promoting an independent RAD framework" by combining their best practice experiences. The origins were an event organized by the Butler Group in London. People at that meeting all worked for blue-chip organizations such as British Airways, American Express, Oracle, and Logica (other companies such as Data Sciences and Allied Domecq have since been absorbed by other organizations).
In July 2006, DSDM Public Version 4.2 was made available for individuals to view and use; however, anyone reselling DSDM must still be a member of the not-for-profit consortium.
In 2014, the DSDM handbook was made available online and public. Additionally, templates for DSDM can be downloaded.
In October 2016 the DSDM Consortium rebranded as the Agile Business Consortium (ABC). The Agile Business Consortium is a not-for-profit, vendor-independent organisation which owns and administers the DSDM framework.
== Description ==
DSDM is a vendor-independent approach that recognises that more projects fail because of people problems than technology. DSDM's focus is on helping people to work effectively together to achieve the business goals. DSDM is also independent of tools and techniques enabling it to be used in any business and technical environment without tying the business to a particular vendor.
=== Principles ===
There are eight principles underpinning DSDM. These principles direct the team in the attitude they must take and the mindset they must adopt to deliver consistently.
Focus on the business need
Deliver on time
Collaborate
Never compromise quality
Build incrementally from firm foundations
Develop iteratively
Communicate continuously and clearly
Demonstrate control
=== Core techniques ===
Timeboxing: the approach for completing the project incrementally by breaking it down into portions, each with a fixed budget and a delivery date. For each portion a number of requirements are prioritised and selected. Because time and budget are fixed, the only remaining variable is the requirements: if a project is running out of time or money, the requirements with the lowest priority are omitted. This does not mean that an unfinished product is delivered. By the Pareto principle, 80% of a project's value comes from 20% of the system requirements, so as long as the most important 20% of requirements are implemented, the system meets the business needs; no system is built perfectly on the first try anyway.
MoSCoW: is a technique for prioritising work items or requirements. It is an acronym that stands for:
Must have
Should have
Could have
Won't have
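The interaction between timeboxing and MoSCoW can be sketched as a simple scope-selection routine. Everything below (the requirement names, effort figures, and `plan_timebox` function) is hypothetical, illustrating only the principle that with time and budget fixed, scope is trimmed from the lowest priorities upward:

```python
# Ranking of MoSCoW priorities: lower value = higher priority.
PRIORITY = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

# Hypothetical requirements as (name, priority, effort in days).
requirements = [
    ("User login", "Must", 5),
    ("Audit trail", "Should", 4),
    ("Dark mode", "Could", 3),
    ("Export to PDF", "Should", 6),
    ("Offline mode", "Won't", 8),
]

def plan_timebox(reqs, capacity_days):
    """Select scope for a fixed-length timebox: take requirements in
    priority order, dropping items that no longer fit the remaining
    capacity. 'Won't have' items are excluded by definition."""
    selected, remaining = [], capacity_days
    for name, prio, effort in sorted(reqs, key=lambda r: PRIORITY[r[1]]):
        if prio == "Won't":
            continue
        if effort <= remaining:
            selected.append(name)
            remaining -= effort
    return selected

print(plan_timebox(requirements, 12))
```

In a real DSDM timebox the Must haves are guaranteed first (if they cannot all fit, the timebox itself is renegotiated); the sketch simply shows how fixing time and budget makes scope the adjustable variable.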
Prototyping: refers to the creation of prototypes of the system under development at an early stage of the project. It enables the early discovery of shortcomings in the system and allows future users to 'test-drive' the system. In this way good user involvement is realised, which is one of the key success factors of DSDM, or indeed of any system development project.
Testing: helps ensure a solution of good quality, DSDM advocates testing throughout each iteration. Since DSDM is a tool and technique independent method, the project team is free to choose its own test management method.
Workshop: brings project stakeholders together to discuss requirements, functionalities and mutual understanding.
Modeling: helps visualise a business domain and improve understanding. Produces a diagrammatic representation of specific aspects of the system or business area that is being developed.
Configuration management: with multiple deliverables under development at the same time and being delivered incrementally at the end of each time-box, the deliverables need to be well managed towards completion.
=== Roles ===
There are some roles introduced within the DSDM environment. Project members need to be appointed to their roles before the project commences. Each role has its own responsibility. The roles are:
Executive sponsor: Also called the project champion. An important role from the user organisation with the ability and responsibility to commit appropriate funds and resources. This role has ultimate power to make decisions.
Visionary: The one who has the responsibility to initialise the project by ensuring that essential requirements are found early on. The visionary has the most accurate perception of the business objectives of the system and the project. Another task is to supervise and keep the development process on the right track.
Ambassador user: Brings the knowledge of the user community into the project, ensures that the developers receive enough user feedback during the development process.
Advisor user: Can be any user that represents an important viewpoint and brings daily knowledge of the project.
Project manager: Can be anyone from the user community or IT staff who manages the project in general.
Technical co-ordinator: Responsible for designing the system architecture and controlling the technical quality of the project.
Team leader: Leads their team and ensures that the team works effectively as a whole.
Solution developer: Interprets the system requirements and models them, including developing the deliverable code and building the prototypes.
Solution tester: Checks correctness from a technical standpoint by performing tests, raising defects where necessary, and retesting once they are fixed. The tester also provides comments and documentation.
Scribe: Responsible for gathering and recording the requirements, agreements, and decisions made in every workshop.
Facilitator: Responsible for managing the workshops' progress, acts as a motivator for preparation and communication.
Specialist roles: Business architect, quality manager, system integrator, etc.
=== Critical success factors ===
Within DSDM a number of factors are identified as being of great importance to ensure successful projects.
Factor 1: First there is the acceptance of DSDM by senior management and other employees. This ensures that the different actors of the project are motivated from the start and remain involved throughout the project.
Factor 2: Directly derived from factor 1: The commitment of the management to ensure end-user involvement. The prototyping approach requires a strong and dedicated involvement by end users to test and judge the functional prototypes.
Factor 3: The project team has to be composed of skillful members that form a stable union. An important issue is the empowerment of the project team. This means that the team (or one or more of its members) has to possess the power and possibility to make important decisions regarding the project without having to write formal proposals to higher management, which can be very time-consuming. In order to enable the project team to run a successful project, they also need the appropriate technology to conduct the project. This means a development environment, project management tools, etc.
Factor 4: Finally, DSDM states that a supportive relationship between customer and vendor is required. This goes for projects realised internally within companies as well as those realised by external contractors. An aid in ensuring a supportive relationship could be ISPL.
== Comparison to other development frameworks ==
DSDM can be considered as part of a broad range of iterative and incremental development frameworks, especially those supporting agile and object-oriented methods. These include (but are not limited to) scrum, extreme programming (XP), disciplined agile delivery (DAD), and rational unified process (RUP).
Like DSDM, these share the following characteristics:
They all prioritise requirements and work through them iteratively, building a system or product in increments.
They are tool-independent frameworks. This allows users to fill in the specific steps of the process with their own techniques and software aids of choice.
The variables in development are not time or resources, but the requirements. This approach supports the main goals of DSDM, namely staying within the deadline and the budget.
A strong focus on communication between and the involvement of all the stakeholders in the system. Although this is addressed in other methods, DSDM strongly believes in commitment to the project to ensure a successful outcome.
== See also ==
Agile software development
Lean software development
== References ==
== Further reading ==
Coleman and Verbruggen: A quality software process for rapid application development, Software Quality Journal 7, p. 107-122 (1998)
Beynon-Davies and Williams: The diffusion of information systems development methods, Journal of Strategic Information Systems 12 p. 29-46 (2003)
Sjaak Brinkkemper, Saeki and Harmsen: Assembly Techniques for Method Engineering, Advanced Information Systems Engineering, Proceedings of CaiSE'98, Springer Verlag (1998)
Abrahamsson, Salo, Ronkainen, Warsta Agile Software Development Methods: Review and Analysis, VTT Publications 478, p. 61-68 (2002)
Tuffs, Stapleton, West, Eason: Inter-operability of DSDM with the Rational Unified Process, DSDM Consortium, Issue 1, p. 1-29 (1999)
Rietmann: DSDM in a bird’s eye view, DSDM Consortium, p. 3-8 (2001)
Chris Barry, Kieran Conboy, Michael Lang, Gregory Wojtkowski and Wita Wojtkowski: Information Systems Development: Challenges in Practice, Theory, and Education, Volume 1
Keith Richards: Agile Project Management: running PRINCE2 projects with DSDM Atern, TSO (2007) Archived 2021-01-23 at the Wayback Machine
The DSDM Agile Project Framework (2014)
DSDM Agile Project Management Framework (v6, 2014) interactive mind map
== External links ==
The Agile Business Consortium (formerly, DSDM Consortium)
AgilePM wiki
In software engineering, a software development process or software development life cycle (SDLC) is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.
Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming.
A life-cycle "model" is sometimes considered a more general term for a category of methodologies and a software development "process" is a particular instance as adopted by a specific organization. For example, many specific software development processes fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle.
== History ==
The software development methodology framework did not emerge until the 1960s. According to Elliott (2004), the systems development life cycle can be considered to be the oldest formalized methodology framework for building information systems. The main idea of the software development life cycle has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially" within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines."
Requirements gathering and analysis:
The first phase of the custom software development process involves understanding the client's requirements and objectives. This stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to identify the desired features, functionalities, and overall scope of the software. The development team works closely with the client to analyze existing systems and workflows, determine technical feasibility, and define project milestones.
Planning and design:
Once the requirements are understood, the custom software development team proceeds to create a comprehensive project plan. This plan outlines the development roadmap, including timelines, resource allocation, and deliverables. The software architecture and design are also established during this phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's usability, intuitiveness, and visual appeal.
Development:
With the planning and design in place, the development team begins the coding process. This phase involves writing, testing, and debugging the software code. Agile methodologies, such as scrum or kanban, are often employed to promote flexibility, collaboration, and iterative development. Regular communication between the development team and the client ensures transparency and enables quick feedback and adjustments.
Testing and quality assurance:
To ensure the software's reliability, performance, and security, rigorous testing and quality assurance (QA) processes are carried out. Different testing techniques, including unit testing, integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions as intended.
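As an illustration of unit testing against a predefined requirement, consider the hand-rolled check below. The `RateCalculator` class and its stated requirement are hypothetical, used only to show the shape of such a test:

```java
// The (hypothetical) requirement: the monthly rate is the annual
// rate divided by 12, and a zero annual rate yields a zero monthly rate.
class RateCalculator {
    static double monthlyRate(double annualRate) {
        return annualRate / 12.0;
    }
}

// A minimal hand-rolled unit test; in practice a framework such as
// JUnit would collect and report these checks.
class RateCalculatorTest {
    static boolean run() {
        boolean twelvePercent =
                Math.abs(RateCalculator.monthlyRate(0.12) - 0.01) < 1e-9;
        boolean zeroRate = RateCalculator.monthlyRate(0.0) == 0.0;
        return twelvePercent && zeroRate;
    }
}
```

Each assertion validates the code against one predefined requirement, which is the essence of the QA activities described above.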
Deployment and implementation:
Once the software passes the testing phase, it is ready for deployment and implementation. The development team assists the client in setting up the software environment, migrating data if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth transition and enable users to maximize the software's potential.
Maintenance and support:
After the software is deployed, ongoing maintenance and support become crucial to address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and security patches are released to keep the software up-to-date and secure. This phase also involves providing technical support to end users and addressing their queries or concerns.
Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include:
1970s
Structured programming since 1969
Cap Gemini SDM, originally from PANDATA, the first English translation was published in 1974. SDM stands for System Development Methodology
1980s
Structured systems analysis and design method (SSADM) from 1980 onwards
Information Requirement Analysis/Soft systems methodology
1990s
Object-oriented programming (OOP) developed in the early 1960s and became a dominant programming approach during the mid-1990s
Rapid application development (RAD), since 1991
Dynamic systems development method (DSDM), since 1994
Scrum, since 1995
Team software process, since 1998
Rational Unified Process (RUP), maintained by IBM since 1998
Extreme programming, since 1999
2000s
Agile Unified Process (AUP) maintained since 2005 by Scott Ambler
Disciplined agile delivery (DAD) Supersedes AUP
2010s
Scaled Agile Framework (SAFe)
Large-Scale Scrum (LeSS)
DevOps
Since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies - yet many organizations, especially governments, still use pre-agile processes (often waterfall or similar). Software process and software quality are closely interrelated; some unexpected facets and effects have been observed in practice.
Another software development process has been established in open source. The adoption of these known and established best practices within the confines of a company is called inner source.
== Prototyping ==
Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being developed.
The basic principles are:
Prototyping is not a standalone, complete development methodology, but rather an approach to try out particular features in the context of a full methodology (such as incremental, spiral, or rapid application development (RAD)).
Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease of change during the development process.
The client is involved throughout the development process, which increases the likelihood of client acceptance of the final implementation.
While some prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies.
== Methodologies ==
=== Agile development ===
"Agile software development" refers to a group of software development frameworks based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when the Agile Manifesto was formulated.
Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system.
The Agile model also includes the following software development processes:
Dynamic systems development method (DSDM)
Kanban
Scrum
Lean software development
=== Continuous integration ===
Continuous integration is the practice of merging all developer working copies to a shared mainline several times a day.
Grady Booch first named and proposed CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.
=== Incremental development ===
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
There are three main variants of incremental development:
A series of mini-waterfalls are performed, where all phases of the waterfall are completed for a small part of a system, before proceeding to the next increment, or
Overall requirements are defined before proceeding to evolutionary, mini-waterfall development of individual increments of a system, or
The initial software concept, requirements analysis, and design of architecture and system core are defined via waterfall, followed by incremental implementation, which culminates in installing the final version, a working system.
=== Rapid application development ===
Rapid application development (RAD) is a software development methodology, which favors iterative development and the rapid construction of prototypes instead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster and makes it easier to change requirements.
The rapid development process starts with the development of preliminary data models and business process models using structured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems".
The term was first used to describe a software development process introduced by James Martin in 1991. According to Whitten (2003), it is a merger of various structured techniques, especially data-driven information technology engineering, with prototyping techniques to accelerate software systems development.
The basic principles of rapid application development are:
Key objective is for fast development and delivery of a high-quality system at a relatively low investment cost.
Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease of change during the development process.
Aims to produce high-quality systems quickly, primarily via iterative Prototyping (at any stage of development), active user involvement, and computerized development tools. These tools may include graphical user interface (GUI) builders, Computer Aided Software Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation programming languages, code generators, and object-oriented techniques.
Key emphasis is on fulfilling the business need, while technological or engineering excellence is of lesser importance.
Project control involves prioritizing development and defining delivery deadlines or “timeboxes”. If the project starts to slip, the emphasis is on reducing requirements to fit the timebox, not on increasing the deadline.
Generally includes joint application design (JAD), where users are intensely involved in system design, via consensus building in either structured workshops, or electronically facilitated interaction.
Active user involvement is imperative.
Iteratively produces production software, as opposed to a throwaway prototype.
Produces documentation necessary to facilitate future development and maintenance.
Standard systems analysis and design methods can be fitted into this framework.
=== Waterfall development ===
The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically:
Requirements analysis resulting in a software requirements specification
Software design
Implementation
Testing
Integration, if there are multiple subsystems
Deployment (or Installation)
Maintenance
The first formal description of the method is often cited as an article published by Winston W. Royce in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.
The basic principles are:
The Project is divided into sequential phases, with some overlap and splashback acceptable between phases.
Emphasis is on planning, time schedules, target dates, budgets, and implementation of an entire system at one time.
Tight control is maintained over the life of the project via extensive written documentation, formal reviews, and approval/signoff by the user and information technology management occurring at the end of most phases before beginning the next phase. Written documentation is an explicit deliverable of each phase.
The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete. This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other more "flexible" models. It has been widely blamed for several large-scale government projects running over budget, over time and sometimes failing to deliver on requirements due to the big design up front approach. Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development. See Criticism of waterfall model.
=== Spiral development ===
In 1988, Barry Boehm published a formal software system development "spiral model," which combines some key aspects of the waterfall model and rapid prototyping methodologies, in an effort to combine advantages of top-down and bottom-up concepts. It provided emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.
The basic principles are:
Focus is on risk assessment and on minimizing project risk by breaking a project into smaller segments and providing more ease-of-change during the development process, as well as providing the opportunity to evaluate risks and weigh consideration of project continuation throughout the life cycle.
"Each cycle involves a progression through the same sequence of steps, for each part of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program."
Each trip around the spiral traverses four basic quadrants: (1) determine objectives, alternatives, and constraints of the iteration, and (2) evaluate alternatives; Identify and resolve risks; (3) develop and verify deliverables from the iteration; and (4) plan the next iteration.
Begin each cycle with an identification of stakeholders and their "win conditions", and end each cycle with review and commitment.
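The cycle structure above can be sketched as a loop. The quadrant descriptions come from the model itself, while the class and method names and the fixed cycle count are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// A schematic spiral: every cycle traverses the same four quadrants,
// at successive levels of elaboration.
class SpiralSketch {
    static final List<String> QUADRANTS = List.of(
            "determine objectives, alternatives and constraints",
            "evaluate alternatives; identify and resolve risks",
            "develop and verify deliverables",
            "plan the next iteration");

    // Returns the ordered log of steps taken over the given number of cycles.
    static List<String> run(int cycles) {
        List<String> log = new ArrayList<>();
        for (int cycle = 1; cycle <= cycles; cycle++) {
            for (String quadrant : QUADRANTS) {
                log.add("cycle " + cycle + ": " + quadrant);
            }
        }
        return log;
    }
}
```

In a real project the number of cycles is not fixed up front; each "plan the next iteration" step decides whether another trip around the spiral is justified by the remaining risk.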
=== Shape Up ===
Shape Up is a software development approach introduced by Basecamp in 2018. It is a set of principles and techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear end. Its primary target audience is remote teams. Shape Up has no estimation and velocity tracking, backlogs, or sprints, unlike waterfall, agile, or scrum. Instead, those concepts are replaced with appetite, betting, and cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice and Block.
=== Advanced methodologies ===
Other high-level software project methodologies include:
Behavior-driven development and business process management.
Chaos model - the main rule is to always resolve the most important issue first.
Incremental funding methodology - an iterative approach
Lightweight methodology - a general term for methods that only have a few rules and practices
Structured systems analysis and design method - a specific version of waterfall
Slow programming, as part of the larger Slow Movement, emphasizes careful and gradual work without (or minimal) time pressures. Slow programming aims to avoid bugs and overly quick release schedules.
V-Model (software development) - an extension of the waterfall model
Unified Process (UP) is an iterative software development methodology framework, based on Unified Modeling Language (UML). UP organizes the development of software into four phases, each consisting of one or more executable iterations of the software at that stage of development: inception, elaboration, construction, and transition.
== Process meta-models ==
Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization.
ISO/IEC 12207 is the international standard describing the method to select, implement, and monitor the life cycle for software.
The Capability Maturity Model Integration (CMMI) is one of the leading models and is based on best practices. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or the software produced. CMMI has replaced CMM.
ISO 9000 describes standards for a formally organized process to manufacture a product and the methods of managing and monitoring progress. Although the standard was originally created for the manufacturing sector, ISO 9000 standards have been applied to software development as well. Like CMMI, certification with ISO 9000 does not guarantee the quality of the end result, only that formalized business processes have been followed.
ISO/IEC 15504 Information technology—Process assessment, also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMMI. It models processes to manage, control, guide, and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team.
ISO/IEC 24744 Software Engineering—Metamodel for Development Methodologies, is a power type-based metamodel for software development methodologies.
Soft systems methodology - a general method for improving management processes.
Method engineering - a general method for improving information system processes.
== See also ==
Systems development life cycle
Computer-aided software engineering (some of these tools support specific methodologies)
List of software development philosophies
Outline of software engineering
Software Project Management
Software development
Software development effort estimation
Software documentation
Software release life cycle
Top-down and bottom-up design#Computer science
== References ==
== External links ==
Selecting a development approach Archived January 2, 2019, at the Wayback Machine at cms.hhs.gov.
Gerhard Fischer, "The Software Technology of the 21st Century: From Software Reuse to Collaborative Software Design" Archived September 15, 2009, at the Wayback Machine, 2001
Microsoft Solutions Framework (MSF) is a set of principles, models, disciplines, concepts, and guidelines for delivering information technology services from Microsoft. MSF is not limited to developing applications only; it is also applicable to other IT projects like deployment, networking or infrastructure projects. MSF does not force the developer to use a specific methodology (such as the waterfall model or agile software development).
== History ==
MSF was first introduced by Microsoft as version 1.0 in 1993, and a version 2.0 was released in 1997.
In 2002, MSF version 3.0 was released. It modified version 2.0 in the following ways:
Combined previously separate models into unified Team and Process models designed for application across a variety of project types including deployment, enterprise software integration, and development projects.
Folded the Application Development and Infrastructure Deployment models into a single Process Model consisting of five phases.
Added Project Management and Readiness Management Disciplines.
Made changes to the Risk Management Discipline.
Added links between MSF and the Microsoft Operations Framework (MOF).
Added an MSF Practitioner Program designed to train individuals to lead or participate in MSF projects.
MSF version 4.0 was released in 2005. The release was a major refresh of the Process Model (now called the Governance Model) and the Team Model. MSF 4.0 included techniques for two separate methodologies: MSF for Agile Software Development (MSF Agile) and MSF for CMMI Process Improvement (MSF4CMMI).
== Components ==
MSF 4.0 is a combination of a metamodel which can be used as a base for prescriptive software engineering processes, and two customizable and scalable software engineering processes. The MSF metamodel consists of foundational principles, a team model and cycles and iterations.
MSF 4.0 provides a higher-level framework of guidance and principles which can be mapped to a variety of prescriptive process templates. It is structured in both descriptive and prescriptive methodologies. The descriptive component is called the MSF 4.0 metamodel, which is a theoretical description of the SDLC best practices for creating SDLC methodologies. Microsoft is of the opinion that organizations have diverging dynamics and contrary priorities during their software development; some organizations need a responsive and adaptable software development environment, while others need a standardized, repeatable and more controlled environment. To fulfill these needs, Microsoft represents the metamodel of MSF 4.0 in two prescriptive methodology templates that provide specific process guidance, for agile software development (MSF4ASD) and for the Capability Maturity Model (MSF4CMMI). These software engineering processes can be modified and customized to the preferences of organization, customer and project team.
The MSF philosophy holds that there is no single structure or process that optimally applies to the requirements and environments for all sorts of projects. Therefore, MSF supports multiple process approaches, so it can be adapted to support any project, regardless of size or complexity. This flexibility means that it can support a wide degree of variation in the implementation of software engineering processes while retaining a set of core principles and mindsets.
The MSF process model consists of a series of short development cycles and iterations. This model embraces rapid iterative development with continuous learning and refinement, owing to the stakeholders' progressively deeper understanding of the business and the project. Identifying requirements, product development, and testing occur in overlapping iterations, resulting in incremental completion and ensuring a continuous flow of value from the project. Each iteration has a different focus and results in a stable portion of the overall system.
== References ==
== External links ==
Microsoft Solution Framework home page
Microsoft Solution Framework in Visual Studio 2005 Team System
MSF Essentials book
The following tables list notable software packages that are nominal IDEs; standalone tools such as source-code editors and GUI builders are not included. These IDEs are listed in alphabetic order of the supported language.
== ActionScript ==
== Ada ==
== Assembly ==
== BASIC ==
== C/C++ ==
== C# ==
== COBOL ==
== Common Lisp ==
== Component Pascal ==
== D ==
== Eiffel ==
== Erlang ==
Go to this page: Source code editors for Erlang
== Fortran ==
== F# ==
== Groovy ==
== Haskell ==
== Haxe ==
Go to this page: Comparison of IDE choices for Haxe programmers
== Java ==
Java has strong IDE support, due not only to its historical and economic importance, but also due to a combination of reflection and static-typing making it well-suited for IDE support.
Some of the leading Java IDEs (such as IntelliJ and Eclipse) are also the basis for leading IDEs in other programming languages (e.g. for Python, IntelliJ is rebranded as PyCharm, and Eclipse has the PyDev plugin.)
=== Open ===
=== Closed ===
== JavaScript ==
== Julia ==
== Lua ==
== Pascal, Object Pascal ==
== Perl ==
== PHP ==
== Python ==
== R ==
== Racket ==
== Ruby ==
== Rust ==
== Scala ==
== Smalltalk ==
== Tcl ==
== Unclassified ==
IBM Rational Business Developer
Mule (software)
== Visual Basic .NET ==
== See also ==
Comparison of assemblers
Graphical user interface builder
List of compilers
Source-code editor
Game integrated development environment
== References ==
A method in object-oriented programming (OOP) is a procedure associated with an object, and generally also a message. An object consists of state data and behavior; these compose an interface, which specifies how the object may be used. A method is a behavior of an object parametrized by a user.
Data is represented as properties of the object, and behaviors are represented as methods. For example, a Window object could have methods such as open and close, while its state (whether it is open or closed at any given point in time) would be a property.
In class-based programming, methods are defined within a class, and objects are instances of a given class. One of the most important capabilities that a method provides is method overriding - the same name (e.g., area) can be used for multiple different kinds of classes. This allows the sending objects to invoke behaviors and to delegate the implementation of those behaviors to the receiving object. In Java, for example, a method defines the behavior of a class's objects: an object can send an area message to another object and the appropriate formula is invoked whether the receiving object is a rectangle, circle, triangle, etc.
Methods also provide the interface that other classes use to access and modify the properties of an object; this is known as encapsulation. Encapsulation and overriding are the two primary distinguishing features between methods and procedure calls.
== Overriding and overloading ==
Method overriding and overloading are two of the most significant ways that a method differs from a conventional procedure or function call. Overriding refers to a subclass redefining the implementation of a method of its superclass. For example, findArea may be a method defined on a shape class; subclasses such as triangle and circle would each define the appropriate formula to calculate their area. The idea is to look at objects as "black boxes" so that changes to the internals of the object can be made with minimal impact on the other objects that use it. This is known as encapsulation and is meant to make code easier to maintain and re-use.
Method overloading, on the other hand, refers to differentiating the code used to handle a message based on the parameters of the method. If one views the receiving object as the first parameter in any method then overriding is just a special case of overloading where the selection is based only on the first argument. The following simple Java example illustrates the difference:
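The Java example referenced above is not reproduced in this text; the following minimal sketch (class and method names are illustrative, not from the original) shows both mechanisms:

```java
// Overriding: a subclass redefines a superclass method with the same signature.
class Shape {
    double area() { return 0.0; }               // default implementation
}

class Rectangle extends Shape {
    private final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    @Override
    double area() { return w * h; }             // overrides Shape.area()
}

// Overloading: the same method name with different parameter lists.
class Printer {
    String print(int x)    { return "int: " + x; }
    String print(String s) { return "string: " + s; }
}
```

Given `Shape s = new Rectangle(2, 3)`, the call `s.area()` dispatches at run time to the Rectangle version (overriding), whereas the compiler selects between the two print methods at compile time based on the argument types (overloading).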
== Accessor, mutator and manager methods ==
Accessor methods are used to read the data values of an object. Mutator methods are used to modify the data of an object. Manager methods are used to initialize and destroy objects of a class, e.g. constructors and destructors.
These methods provide an abstraction layer that facilitates encapsulation and modularity. For example, if a bank-account class provides a getBalance() accessor method to retrieve the current balance (rather than directly accessing the balance data fields), then later revisions of the same code can implement a more complex mechanism for balance retrieval (e.g., a database fetch), without the dependent code needing to be changed. The concepts of encapsulation and modularity are not unique to object-oriented programming. Indeed, in many ways the object-oriented approach is simply the logical extension of previous paradigms such as abstract data types and structured programming.
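As a sketch of this idea (the class below is a hypothetical illustration, not a prescribed API):

```java
class BankAccount {
    private long balanceCents;                   // state is hidden (encapsulation)

    // Accessor: reads a data value without exposing the field itself.
    public long getBalance() { return balanceCents; }

    // Mutator: the only way outside code may modify the state,
    // which lets the class enforce its invariants.
    public void deposit(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative deposit");
        balanceCents += cents;
    }
}
```

Because callers go through getBalance() rather than reading a field directly, a later revision could fetch the balance from a database without changing dependent code.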
=== Constructors ===
A constructor is a method that is called at the beginning of an object's lifetime to create and initialize the object, a process called construction (or instantiation). Initialization may include an acquisition of resources. Constructors may have parameters but, in most languages, do not return values. See the following example in Java:
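The Java listing referred to above is not included in this text; a minimal illustrative constructor (names are hypothetical) might look like this:

```java
class Point {
    private final int x, y;

    // Constructor: runs once, at creation, to establish a valid initial state.
    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }
    int getY() { return y; }
}
```

A caller constructs (instantiates) the object with `new Point(3, 4)`, which allocates it and invokes the constructor.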
=== Destructor ===
A destructor is a method that is called automatically at the end of an object's lifetime, a process called destruction. In most languages, destructors take no arguments and return no values. Destructors can be implemented so as to perform cleanup chores and other tasks at object destruction.
==== Finalizers ====
In garbage-collected languages, such as Java, C#, and Python, destructors are known as finalizers. They have a similar purpose and function to destructors, but because of the differences between languages that utilize garbage collection and languages with manual memory management, the sequence in which they are called is different.
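In modern Java, the legacy finalize() mechanism is deprecated in favor of java.lang.ref.Cleaner. The following sketch (class names are illustrative) registers a cleanup action that the runtime invokes after the object becomes unreachable, or that runs earlier if close() is called explicitly:

```java
import java.lang.ref.Cleaner;

class Resource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup action must not reference the Resource itself,
    // or the object could never become unreachable.
    private static final class State implements Runnable {
        volatile boolean released = false;
        public void run() { released = true; }   // e.g. release a native handle
    }

    private final State state = new State();
    private final Cleaner.Cleanable cleanable = CLEANER.register(this, state);

    boolean isReleased() { return state.released; }

    // Deterministic release; the Cleaner then acts only as a safety net.
    public void close() { cleanable.clean(); }
}
```

Calling clean() runs the registered action at most once, which keeps explicit and garbage-collection-triggered cleanup from overlapping.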
== Abstract methods ==
An abstract method is one with only a signature and no implementation body. It is often used to specify that a subclass must provide an implementation of the method, as in an abstract class. Abstract methods are used to specify interfaces in some programming languages.
=== Example ===
The following Java code shows an abstract class that needs to be extended:
The following subclass extends the main class:
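The two listings referred to above are not included in this text; a combined illustrative sketch (class names are hypothetical) might read:

```java
// Abstract class: declares a method signature without a body.
abstract class Figure {
    abstract double area();                           // subclasses must implement this

    String describe() { return "area = " + area(); }  // concrete code may rely on it
}

// Concrete subclass: supplies the missing implementation.
class Circle extends Figure {
    private final double radius;
    Circle(double radius) { this.radius = radius; }

    @Override
    double area() { return Math.PI * radius * radius; }
}
```

Figure itself cannot be instantiated; only subclasses that implement area(), such as Circle, can.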
=== Reabstraction ===
If a subclass provides an implementation for an abstract method, another subclass can make it abstract again. This is called reabstraction.
In practice, this is rarely used.
==== Example ====
In C#, a virtual method can be overridden with an abstract method. (This also applies to Java, where all non-private instance methods are virtual.)
Interfaces' default methods can also be reabstracted, requiring subclasses to implement them. (This also applies to Java.)
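As an illustrative Java sketch of reabstraction via interfaces (names are hypothetical):

```java
interface Greeter {
    default String greet() { return "hello"; }   // default implementation
}

// Reabstraction: the sub-interface redeclares greet() without a body,
// so any class implementing Abstracted must supply its own version.
interface Abstracted extends Greeter {
    String greet();
}

class Polite implements Abstracted {
    public String greet() { return "good day"; } // required again here
}
```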
== Class methods ==
Class methods are methods that are called on a class rather than an instance. They are typically used as part of an object meta-model, in which each class is itself represented by an instance of a class object in the meta-model. Meta-model protocols allow classes to be created and deleted. In this sense, they provide the same functionality as the constructors and destructors described above. But in some languages such as the Common Lisp Object System (CLOS) the meta-model allows the developer to dynamically alter the object model at run time: e.g., to create new classes, redefine the class hierarchy, modify properties, etc.
== Special methods ==
Special methods are very language-specific and a language may support none, some, or all of the special methods defined here. A language's compiler may automatically generate default special methods or a programmer may be allowed to optionally define special methods. Most special methods cannot be directly called, but rather the compiler generates code to call them at appropriate times.
=== Static methods ===
Static methods are meant to be relevant to all the instances of a class rather than to any specific instance. They are similar to static variables in that sense. An example would be a static method to sum the values of all the variables of every instance of a class. For example, if there were a Product class it might have a static method to compute the average price of all products.
A static method can be invoked even if no instances of the class exist yet. Static methods are called "static" because they are resolved at compile time based on the class they are called on, and not dynamically as is the case with instance methods, which are resolved polymorphically based on the runtime type of the object.
==== Examples ====
===== In Java =====
In Java, a commonly used static method is:
Math.max(double a, double b)
This static method has no owning object and does not run on an instance. It receives all information from its arguments.
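The Product example mentioned earlier can be sketched as follows (the class and method are illustrative, not a standard API):

```java
import java.util.List;

class Product {
    private final double price;
    Product(double price) { this.price = price; }
    double getPrice() { return price; }

    // Static method: belongs to the class itself, not to any one instance,
    // so it can be called even when no Product objects exist.
    static double averagePrice(List<Product> products) {
        if (products.isEmpty()) return 0.0;
        double sum = 0.0;
        for (Product p : products) sum += p.getPrice();
        return sum / products.size();
    }
}
```

Like Math.max, averagePrice receives everything it needs through its arguments and runs on no particular instance.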
=== Copy-assignment operators ===
Copy-assignment operators define actions to be performed by the compiler when a class object is assigned to a class object of the same type.
=== Operator methods ===
Operator methods define or redefine operator symbols and define the operations to be performed with the symbol and the associated method parameters. In C++, for example, a class can redefine the + symbol by declaring a member function named operator+.
== Member functions in C++ ==
Some procedural languages were extended with object-oriented capabilities to leverage the large skill sets and legacy code for those languages but still provide the benefits of object-oriented development. Perhaps the most well-known example is C++, an object-oriented extension of the C programming language. Due to the design requirements to add the object-oriented paradigm on to an existing procedural language, message passing in C++ has some unique capabilities and terminologies. For example, in C++ a method is known as a member function. C++ also has the concept of virtual functions which are member functions that can be overridden in derived classes and allow for dynamic dispatch.
=== Virtual functions ===
Virtual functions are the means by which a C++ class can achieve polymorphic behavior: they are declared with the virtual keyword and dispatched at run time based on the dynamic type of the object. Non-virtual member functions, or regular methods, are those that do not participate in polymorphism.
== See also ==
Property (programming)
Remote method invocation
Subroutine, also called subprogram, routine, procedure or function
== Notes ==
== References == | Wikipedia/Method_(computer_science) |
Prograph is a visual, object-oriented, dataflow, multiparadigm programming language that uses iconic symbols to represent actions to be taken on data. Commercial Prograph software development environments such as Prograph Classic and Prograph CPX were available for the Apple Macintosh and Windows platforms for many years but were eventually withdrawn from the market in the late 1990s. Support for the Prograph language on macOS has recently reappeared with the release of the Marten software development environment.
== History ==
Research on Prograph started at Acadia University in 1982 as a general investigation into dataflow languages, stimulated by a seminar on functional languages conducted by Michael Levin. Diagrams were used to clarify the discussion, leading to the insight: "since the diagrams are clearer than the code, why not make the diagrams themselves executable!" Thus Prograph - Programming in Graphics - was born as a visual dataflow language. This work was led by Dr. Tomasz Pietrzykowski, with Stan Matwin and Thomas Muldner co-authoring early papers. From 1983 to 1985, research prototypes were built on a Three Rivers PERQ graphics workstation (in Pascal, with the data visualized as fireballs moving down datalinks) and on a VAX with a Tektronix terminal, and an experimental compiler was programmed on an IBM PC. This work was continued at Technical University of Nova Scotia by Pietrzykowski and Dr. Philip Cox, including a version done in Prolog.
In 1985, work began on a commercialisable prototype on the Macintosh, the only widely available, low-priced computer with high-level graphics support available at the time. In early 1986, this prototype was taken over by The Gunakara Sun Systems (later renamed to TGS Systems) for commercialisation, TGS formerly being a consulting firm formed by Pietrzykowski at Acadia University. Working with Pietrzykowski and Cox, Terry Kilshaw hired and managed the original development team, with Jim Laskey as the lead developer. In 1987 Mark Szpakowski suggested the merger of object-orientation with visual dataflow, creating an "objectflow" system. After almost four years of development, the first commercial release, v1.2, was introduced at the OOPSLA conference in New Orleans in October 1989. This product won the 1989 MacUser Editor's Choice Award for Best Development Tool. Version 2.0, released in July 1990, added a compiler to the system.
TGS changed its name to Prograph International (PI) in 1990. Although sales were slow, development of a new version, Prograph CPX (Cross-Platform eXtensions), was undertaken in 1992, which was intended to support building fully cross-platform applications. This version was released in 1993, and was immediately followed by development of a client-server application framework. Despite increasing sales, the company was unable to sustain operating costs, and following a failed financing attempt in late 1994, went into receivership in early 1995.
As the receivership proceeded, the management and employees of PI formed a new company, Pictorius, which acquired the assets of PI. Shortly afterwards, development of a Windows version of Prograph CPX was begun. Although it was never formally released, versions of Windows Prograph were regularly made available to Prograph CPX customers, some of whom ported existing applications written in Macintosh Prograph, with varying degrees of success.
After management changes at the new company, emphasis shifted from tools development to custom programming and web application development. In April 2002 the web development part of the company was acquired by the Paragon Technology Group of Bermuda and renamed Paragon Canada. The Pictorius name and rights to the Prograph source code were retained by McLean Watson Capital, a Toronto-based investments firm which had heavily funded Pictorius. A reference to Pictorius appeared for a time on the former's Portfolio page, but has since disappeared. The Windows version of CPX was later released for free use, and was available for some time for download from the remnants of the Pictorius website (link below).
A group of Prograph users ("Prographers") calling themselves "The Open Prograph Initiative" (OPI) formed in the late 1990s with the goal of keeping Prograph viable in the face of OS advances by Apple and Microsoft. For a time, the group also sought to create a new open-source visual programming language to serve as Prograph's successor, but with the advent of Andescotia's Marten visual programming environment, participation in the group essentially ceased.
The Prograph language is supported by the Marten IDE from Andescotia Software.
== Description ==
During the 1970s program complexity was growing considerably, but the tools used to write programs were generally similar to those used in the 1960s. This led to problems when working on larger projects, which would become so complex that even simple changes could have side effects that are difficult to fully understand. Considerable research into the problem led many to feel that the problem was that existing programming systems focused on the logic of the program, while in reality the purpose of a program was to manipulate data. If the data being manipulated is the important aspect of the program, why isn't the data the "first class citizen" of the programming language? Working on that basis, a number of new programming systems evolved, including object-oriented programming and dataflow programming.
Prograph took these concepts further, introducing a combination of object-oriented methodologies and a completely visual environment for programming. Objects are represented by hexagons with two sides, one containing the data fields, the other the methods that operate on them. Double-clicking on either side would open a window showing the details for that object; for instance, opening the variables side would show class variables at the top and instance variables below. Double-clicking the method side shows the methods implemented in this class, as well as those inherited from the superclass. When a method itself is double-clicked, it opens into another window displaying the logic.
In Prograph a method is represented by a series of icons, each icon containing an instruction (or a group of instructions). Within each method the flow of data is represented by lines in a directed graph. Data flows in the top of the diagram, passes through various instructions, and eventually flows back out the bottom (if there is any output).
Several features of the Prograph system are evident in this picture of a database sorting operation. The upper bar shows that this method, concurrent sort, is being passed in a single parameter, A Database Object. This object is then fed, via the lines, into several operations. Three of these extract a named index (indexA etc.) from the object using the getter operation (the unconnected getter output passes on the "whole" object), and then passes the extracted index to a sort operation. The output of these sort operations are then passed, along with a reference to the original database, to the final operation, update database. The bar at the bottom of the picture represents the outputs of this method, and in this case there are no connections to it and so this method does not return a value. Also note that although this is a method of some class, there is no self; if self is needed, it can be provided as an input or looked up.
In a dataflow language the operations can take place as soon as they have valid inputs for all of their connections. That means, in traditional terms, that each operation in this method could be carried out at the same time. In the database example, all of the sorts could take place at the same time if the computer were capable of supplying the data. Dataflow languages tend to be inherently concurrent, meaning they are capable of running on multiprocessor systems "naturally", one of the reasons that it garnered so much interest in the 1980s.
Loops and branches are constructed by modifying operations with annotations. For instance, a loop that calls the doit method on a list of input data is constructed by first dragging in the doit operator, then attaching the loop modifier and providing the list as the input to the loop. Another annotation, "injection", allows the method itself to be provided as an input, making Prograph a dynamic language to some degree.
== Execution ==
The integrated Prograph development and execution environment also allowed for visual debugging. The usual breakpoint and single-step mechanisms were supported. Each operation in a data flow diagram was visually highlighted as it executed. A tooltip-like mechanism displayed data values when the mouse hovered over a data link while execution was stopped in debug mode. Visual display of the execution stack allowed for both roll-back and roll-forward execution. For many users the visual execution aspects of the language were as important as its edit-time graphical facilities.
The most important run-time debugging feature was the ability to change the code on the fly while debugging. This allowed for bugs to be fixed while debugging without the need to recompile.
== See also ==
LabVIEW – System-design platform and development environment
PWCT – Visual programming language
Spreadsheet 2000 – a unique spreadsheet written in Prograph
== References ==
== Further reading ==
Cox, P. T.; Pietrzykowski, T. (1984), "Advanced Programming Aids in Prograph", Technical Report 8408, Halifax, Nova Scotia: School of Computer Science, Technical University of Nova Scotia.
Cox, P. T.; Mulligan, I. J. (1984), "Compiling the graphical functional language Prograph", Technical Report 8402, Halifax, Nova Scotia: School of Computer Science, Technical University of Nova Scotia.
Matwin, S.; Pietrzykowski, T. (1985), "Prograph: A Preliminary Report", Computer Languages, 10 (2): 91–126, doi:10.1016/0096-0551(85)90002-5.
Kilshaw, Terry (May 1991), "Prograph Primitives", MacTech Magazine, 7 (5).
Kilshaw, Terry (January 1992), "Prograph 2.5", MacTech Magazine, 8 (1).
Kilshaw, Terry (January 1993), "A Pictorial Button Class in Prograph", MacTech Magazine, 9 (1).
Kilshaw, Terry (March 1994), "A Review of Prograph CPX 1.0", MacTech Magazine, 10 (3): 64–74.
Schmucker, Kurt (November 1994), "Prograph CPX - A Tutorial", MacTech Magazine, 10 (11).
Schmucker, Kurt (January 1995), "Commands and Undo in Prograph CPX", MacTech Magazine, 11 (1).
Schmucker, Kurt (March 1995), "Filters & Sieves in Prograph CPX", MacTech Magazine, 11 (3).
Schmucker, Kurt (May 1995), "MacApp and Prograph CPX - A Comparison", MacTech Magazine, 11 (5).
Shafer, Dan (1994), The Power of Prograph CPX, U.S.A: The Reader Network, ISBN 1-881513-02-5.
Steinman, S. B.; Carver, K. G. (1995), Visual Programming with Prograph CPX, Manning, ISBN 978-1-884777-05-9
== External links ==
Prograph CPX - A Tutorial - an excellent article on the system, dating to shortly after the original release of Prograph CPX
Visual Programming Languages: A Survey - includes a short overview of the Prograph system
The Open Prograph Initiative - Home page of the group interested in creating an open source version of Prograph
The Computer Chronicles - Visual Programming Languages (1993) on YouTube | Wikipedia/Prograph |
Version control (also known as revision control, source control, and source code management) is the software engineering practice of controlling, organizing, and tracking different versions of computer files over their history; primarily source code text files, but generally any type of file.
Version control is a component of software configuration management.
A version control system is a software tool that automates version control. Alternatively, version control is embedded as a feature of some systems such as word processors, spreadsheets, collaborative web docs, and content management systems, e.g., Wikipedia's page history.
Version control includes viewing old versions and enables reverting a file to a previous version.
== Overview ==
As teams develop software, it is common to deploy multiple versions of the same software, and for different developers to work on one or more different versions simultaneously. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk).
At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used in many large software projects. While this method can work, it is inefficient as many near-identical copies of the program have to be maintained. This requires a lot of self-discipline on the part of developers and often leads to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to a set of developers, and this adds the pressure of someone managing permissions so that the code base is not compromised, which adds more complexity. Consequently, systems to automate some or all of the revision control process have been developed. This abstracts most operational steps (hides them from ordinary users).
Moreover, in software development, legal and business practice, and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated revision control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations.
Revision control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.
Many version control systems identify the version of a file as a number or letter, called the version number, version, revision number, revision, or revision level. For example, the first version of a file might be version 1. When the file is changed the next version is 2. Each version is associated with a timestamp and the person making the change. Revisions can be compared, restored, and, with some types of files, merged.
== History ==
IBM's OS/360 IEBUPDTE software update tool dates back to 1962, arguably a precursor to version control system tools. Two source management and version control packages that were heavily used by IBM 360/370 installations were The Librarian and Panvalet.
A full system designed for source code control was started in 1972: the Source Code Control System (SCCS), again for the OS/360. SCCS's user manual, published on December 4, 1975, implied in its introduction that it was the first deliberate revision control system. The Revision Control System (RCS) followed in 1982 and, later, Concurrent Versions System (CVS) added network and concurrent development features to RCS. After CVS, a dominant successor was Subversion, followed by the rise of distributed version control tools such as Git.
== Structure ==
Revision control manages changes to a set of data over time. These changes can be structured in various ways.
Often the data is thought of as a collection of many individual items, such as files or documents, and changes to individual files are tracked. This accords with intuitions about separate files but causes problems when identity changes, such as during renaming, splitting or merging of files. Accordingly, some systems such as Git, instead consider changes to the data as a whole, which is less intuitive for simple changes but simplifies more complex changes.
When data that is under revision control is modified, after being retrieved by checking out, this is not in general immediately reflected in the revision control system (in the repository), but must instead be checked in or committed. A copy outside revision control is known as a "working copy". As a simple example, when editing a computer file, the data stored in memory by the editing program is the working copy, which is committed by saving. Concretely, one may print out a document, edit it by hand, and only later manually input the changes into a computer and save it. For source code control, the working copy is instead a copy of all files in a particular revision, generally stored locally on the developer's computer; in this case saving the file only changes the working copy, and checking into the repository is a separate step.
If multiple people are working on a single data set or document, they are implicitly creating branches of the data (in their working copies), and thus issues of merging arise, as discussed below. For simple collaborative document editing, this can be prevented by using file locking or simply avoiding working on the same document that someone else is working on.
Revision control systems are often centralized, with a single authoritative data store, the repository, and check-outs and check-ins done with reference to this central repository. Alternatively, in distributed revision control, no single repository is authoritative, and data can be checked out and checked into any repository. When checking into a different repository, this is interpreted as a merge or patch.
=== Graph structure ===
In terms of graph theory, revisions are generally thought of as a line of development (the trunk) with branches off of this, forming a directed tree, visualized as one or more parallel lines of development (the "mainlines" of the branches) branching off a trunk. In reality the structure is more complicated, forming a directed acyclic graph, but for many purposes "tree with merges" is an adequate approximation.
Revisions occur in sequence over time, and thus can be arranged in order, either by revision number or timestamp. Revisions are based on past revisions, though it is possible to largely or completely replace an earlier revision, such as "delete all existing text, insert new text". In the simplest case, with no branching or undoing, each revision is based on its immediate predecessor alone, and they form a simple line, with a single latest version, the "HEAD" revision or tip. In graph theory terms, drawing each revision as a point and each "derived revision" relationship as an arrow (conventionally pointing from older to newer, in the same direction as time), this is a linear graph. If there is branching, so that multiple future revisions are based on a past revision, or undoing, so that a revision can depend on a revision older than its immediate predecessor, then the resulting graph is instead a directed tree (each node can have more than one child), and has multiple tips, corresponding to the revisions without children ("latest revision on each branch"). In principle the resulting tree need not have a preferred tip ("main" latest revision) – just various different revisions – but in practice one tip is generally identified as HEAD. When a new revision is based on HEAD, it is either identified as the new HEAD, or considered a new branch. The list of revisions from the start to HEAD (in graph theory terms, the unique path in the tree, which forms a linear graph as before) is the trunk or mainline.
Conversely, when a revision can be based on more than one previous revision (when a node can have more than one parent), the resulting process is called a merge, and is one of the most complex aspects of revision control. This most often occurs when changes occur in multiple branches (most often two, but more are possible), which are then merged into a single branch incorporating both changes. If these changes overlap, it may be difficult or impossible to merge, and manual intervention or rewriting may be required.
In the presence of merges, the resulting graph is no longer a tree, as nodes can have multiple parents, but is instead a rooted directed acyclic graph (DAG). The graph is acyclic since parents are always backwards in time, and rooted because there is an oldest version. Assuming there is a trunk, merges from branches can be considered as "external" to the tree – the changes in the branch are packaged up as a patch, which is applied to HEAD (of the trunk), creating a new revision without any explicit reference to the branch, and preserving the tree structure. Thus, while the actual relations between versions form a DAG, this can be considered a tree plus merges, and the trunk itself is a line.
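This tree-plus-merges structure can be sketched as a small data model; the following Java fragment (types and names are hypothetical) records each revision's parents and identifies the tips, i.e., revisions that no other revision lists as a parent:

```java
import java.util.*;

class Revision {
    final String id;
    final List<Revision> parents;              // empty for a root, two for a merge
    Revision(String id, Revision... parents) {
        this.id = id;
        this.parents = List.of(parents);
    }
}

class Repository {
    // Tips ("heads") are revisions with no children in the DAG.
    static Set<String> tips(List<Revision> all) {
        Set<String> hasChild = new HashSet<>();
        for (Revision r : all)
            for (Revision p : r.parents) hasChild.add(p.id);
        Set<String> result = new LinkedHashSet<>();
        for (Revision r : all)
            if (!hasChild.contains(r.id)) result.add(r.id);
        return result;
    }
}
```

With a root, two branches derived from it, and a merge revision listing both branch heads as parents, the merge is the sole remaining tip, exactly as described above.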
In distributed revision control, in the presence of multiple repositories these may be based on a single original version (a root of the tree), but there need not be an original root - instead there can be a separate root (oldest revision) for each repository. This can happen, for example, if two people start working on a project separately. Similarly, in the presence of multiple data sets (multiple projects) that exchange data or merge, there is no single root, though for simplicity one may think of one project as primary and the other as secondary, merged into the first with or without its own revision history.
== Specialized strategies ==
Engineering revision control developed from formalized processes based on tracking revisions of early blueprints or bluelines. This system of control implicitly allowed returning to an earlier state of the design, for cases in which an engineering dead-end was reached in the development of the design. A revision table was used to keep track of the changes made. Additionally, the modified areas of the drawing were highlighted using revision clouds.
=== In Business and Law ===
Version control is widespread in business and law. Indeed, "contract redline" and "legal blackline" are some of the earliest forms of revision control, and are still employed in business and law with varying degrees of sophistication. The most sophisticated techniques are beginning to be used for the electronic tracking of changes to CAD files (see product data management), supplanting the "manual" electronic implementation of traditional revision control.
== Source-management models ==
Traditional revision control systems use a centralized model where all the revision control functions take place on a shared server. If two developers try to change the same file at the same time, without some method of managing access the developers may end up overwriting each other's work. Centralized revision control systems solve this problem in one of two different "source management models": file locking and version merging.
=== Atomic operations ===
An operation is atomic if the system is left in a consistent state even if the operation is interrupted. The commit operation is usually the most critical in this sense. Commits tell the revision control system to make a group of changes final, and available to all users. Not all revision control systems have atomic commits; Concurrent Versions System lacks this feature.
=== File locking ===
The simplest method of preventing "concurrent access" problems involves locking files so that only one developer at a time has write access to the central "repository" copies of those files. Once one developer "checks out" a file, others can read that file, but no one else may change that file until that developer "checks in" the updated version (or cancels the checkout).
File locking has both merits and drawbacks. It can provide some protection against difficult merge conflicts when a user is making radical changes to many sections of a large file (or group of files). If the files are left exclusively locked for too long, other developers may be tempted to bypass the revision control software and change the files locally, forcing a difficult manual merge when the other changes are finally checked in. In a large organization, files can be left "checked out" and locked and forgotten about as developers move between projects - these tools may or may not make it easy to see who has a file checked out.
=== Version merging ===
Most version control systems allow multiple developers to edit the same file at the same time. The first developer to "check in" changes to the central repository always succeeds. The system may provide facilities to merge further changes into the central repository, and preserve the changes from the first developer when other developers check in.
Merging two files can be a very delicate operation, and usually possible only if the data structure is simple, as in text files. The result of a merge of two image files might not result in an image file at all. The second developer checking in the code will need to take care with the merge, to make sure that the changes are compatible and that the merge operation does not introduce its own logic errors within the files. These problems limit the availability of automatic or semi-automatic merge operations mainly to simple text-based documents, unless a specific merge plugin is available for the file types.
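The decision rule behind such merges is the three-way merge against a common ancestor. Real tools apply the rule region by region within a file (as `diff3` does); the sketch below applies the same rule at whole-file granularity, which is enough to show when a conflict arises:

```python
class MergeConflict(Exception):
    pass

def merge(base, ours, theirs):
    """Three-way merge at whole-file granularity: accept a change only
    when at most one side differs from the common ancestor (base)."""
    if ours == theirs:   # both sides made the same change (or none)
        return ours
    if ours == base:     # only the other side changed the file
        return theirs
    if theirs == base:   # only our side changed the file
        return ours
    raise MergeConflict("both sides changed the file; manual merge needed")
```

Line-based tools apply exactly this test to each differing region of a text file, which is why concurrent edits to different lines usually merge automatically, while edits to the same lines are reported as a conflict.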
The concept of a reserved edit can provide an optional means to explicitly lock a file for exclusive write access, even when a merging capability exists.
=== Baselines, labels and tags ===
Most revision control tools use only one of the similar terms baseline, label, or tag to refer to the action of identifying a snapshot ("label the project") or to the record of the snapshot ("try it with baseline X"). Within any one tool's documentation or discussion, the terms can be considered synonyms.
In most projects, some snapshots are more significant than others, such as those used to indicate published releases, branches, or milestones.
When both the term baseline and either of label or tag are used together in the same context, label and tag usually refer to the mechanism within the tool of identifying or making the record of the snapshot, and baseline indicates the increased significance of any given label or tag.
Most formal discussion of configuration management uses the term baseline.
== Distributed revision control ==
Distributed revision control systems (DRCS) take a peer-to-peer approach, as opposed to the client–server approach of centralized systems. Rather than a single, central repository on which clients synchronize, each peer's working copy of the codebase is a bona-fide repository.
Distributed revision control conducts synchronization by exchanging patches (change-sets) from peer to peer. This results in some important differences from a centralized system:
No canonical, reference copy of the codebase exists by default; only working copies.
Common operations (such as commits, viewing history, and reverting changes) are fast, because there is no need to communicate with a central server.
Rather, communication is only necessary when pushing or pulling changes to or from other peers.
Each working copy effectively functions as a remote backup of the codebase and of its change-history, providing inherent protection against data loss.
== Best practices ==
Following best practices is necessary to obtain the full benefits of version control. Best practices vary by version control tool and by the field in which version control is applied. Generally accepted best practices in software development include: making small, incremental changes; making commits that involve only one task or fix (a corollary is to commit only code that works and does not knowingly break existing functionality); using branching to complete functionality before release; writing clear, descriptive commit messages that make the what, why, and how of a change clear in either the commit message or the code; and using a consistent branching strategy. Other software development best practices, such as code review and automated regression testing, make it easier to follow version control best practices.
== Costs and benefits ==
Costs and benefits will vary dependent upon the version control tool chosen and the field in which it is applied. This section speaks to the field of software development, where version control is widely applied.
=== Costs ===
In addition to the costs of licensing the version control software, using version control requires time and effort. The concepts underlying version control must be understood and the technical particulars required to operate the version control software chosen must be learned. Version control best practices must be learned and integrated into the organization's existing software development practices. Management effort may be required to maintain the discipline needed to follow best practices in order to obtain useful benefit.
=== Benefits ===
==== Allows for reverting changes ====
A core benefit is the ability to keep history and revert changes, allowing the developer to easily undo changes. This gives the developer more opportunity to experiment, eliminating the fear of breaking existing code.
==== Branching simplifies deployment, maintenance and development ====
Branching assists with deployment. Branching and merging, the production, packaging, and labeling of source code patches and the easy application of patches to code bases, simplifies the maintenance and concurrent development of the multiple code bases associated with the various stages of the deployment process; development, testing, staging, production, etc.
==== Damage mitigation, accountability and process and design improvement ====
There can be damage mitigation, accountability, process and design improvement, and other benefits associated with the record keeping provided by version control, the tracking of who did what, when, why, and how.
When bugs arise, knowing what was done when helps with damage mitigation and recovery by assisting in the identification of what problems exist, how long they have existed, and determining problem scope and solutions. Previous versions can be installed and tested to verify conclusions reached by examination of code and commit messages.
==== Simplifies debugging ====
Version control can greatly simplify debugging. The application of a test case to multiple versions can quickly identify the change which introduced a bug. The developer need not be familiar with the entire code base and can focus instead on the code that introduced the problem.
==== Improves collaboration and communication ====
Version control enhances collaboration in multiple ways. Since version control can identify conflicting changes, i.e. incompatible changes made to the same lines of code, there is less need for coordination among developers.
The packaging of commits, branches, and all the associated commit messages and version labels, improves communication between developers, both in the moment and over time. Better communication, whether instant or deferred, can improve the code review process, the testing process, and other critical aspects of the software development process.
== Integration ==
Some of the more advanced revision-control tools offer many other facilities, allowing deeper integration with other tools and software-engineering processes.
=== Integrated development environment ===
Plugins are often available for IDEs such as Oracle JDeveloper, IntelliJ IDEA, Eclipse, Visual Studio, Delphi, NetBeans IDE, Xcode, and GNU Emacs (via vc.el). Research prototypes have also been developed that generate appropriate commit messages automatically.
== Common terminology ==
Terminology can vary from system to system, but some terms in common usage include:
=== Baseline ===
An approved revision of a document or source file to which subsequent changes can be made. See baselines, labels and tags.
=== Blame ===
A search for the author and revision that last modified a particular line.
=== Branch ===
A set of files under version control may be branched or forked at a point in time so that, from that time forward, two copies of those files may develop at different speeds or in different ways independently of each other.
=== Change ===
A change (or diff, or delta) represents a specific modification to a document under version control. The granularity of the modification considered a change varies between version control systems.
=== Change list ===
On many version control systems with atomic multi-change commits, a change list (or CL), change set, update, or patch identifies the set of changes made in a single commit. This can also represent a sequential view of the source code, allowing the examination of source as of any particular changelist ID.
=== Checkout ===
To check out (or co) is to create a local working copy from the repository. A user may specify a specific revision or obtain the latest. The term 'checkout' can also be used as a noun to describe the working copy. In systems that use file locking, a checked-out file cannot be modified by other users until it is checked back in.
=== Clone ===
Cloning means creating a repository containing the revisions from another repository. This is equivalent to pushing or pulling into an empty (newly initialized) repository. As a noun, two repositories can be said to be clones if they are kept synchronized, and contain the same revisions.
=== Commit (noun) ===
A commit, as a noun, is the set of changes written to the repository by a single commit operation, together with its metadata, such as the author, timestamp, and commit message.
=== Commit (verb) ===
To commit (check in, ci or, more rarely, install, submit or record) is to write or merge the changes made in the working copy back to the repository. A commit contains metadata, typically the author information and a commit message that describes the change.
=== Commit message ===
A short note, written by the developer, stored with the commit, which describes the commit. Ideally, it records why the modification was made, a description of the modification's effect or purpose, and non-obvious aspects of how the change works.
=== Conflict ===
A conflict occurs when different parties make changes to the same document, and the system is unable to reconcile the changes. A user must resolve the conflict by combining the changes, or by selecting one change in favour of the other.
=== Delta compression ===
Most revision control software uses delta compression, which retains only the differences between successive versions of files. This allows for more efficient storage of many different versions of files.
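A delta store of this kind can be sketched with Python's standard `difflib`: only copy instructions and inserted text are kept for each new version, and any version can be rebuilt by replaying its delta against its predecessor. The encoding shown is illustrative, not the format of any particular tool:

```python
from difflib import SequenceMatcher

def make_delta(old, new):
    """Encode `new` as copy instructions against `old` plus inserted text."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=old, b=new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))         # reuse old[i1:i2]
        else:
            ops.append(("insert", new[j1:j2]))   # store only the new text
    return ops

def apply_delta(old, delta):
    """Rebuild the new version from the old version and its delta."""
    parts = []
    for op in delta:
        if op[0] == "copy":
            parts.append(old[op[1]:op[2]])
        else:
            parts.append(op[1])
    return "".join(parts)
```

For two versions that differ by a few words, the delta stores only the changed text plus short copy instructions, rather than a second full copy of the file.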
=== Dynamic stream ===
A stream in which some or all file versions are mirrors of the parent stream's versions.
=== Export ===
Exporting is the act of obtaining the files from the repository. It is similar to checking out except that it creates a clean directory tree without the version-control metadata used in a working copy. This is often used prior to publishing the contents, for example.
=== Fetch ===
See pull.
=== Forward integration ===
The process of merging changes made in the main trunk into a development (feature or team) branch.
=== Head ===
Also sometimes called tip, this refers to the most recent commit, either to the trunk or to a branch. The trunk and each branch have their own head, though HEAD is sometimes loosely used to refer to the trunk.
=== Import ===
Importing is the act of copying a local directory tree (that is not currently a working copy) into the repository for the first time.
=== Initialize ===
To create a new, empty repository.
=== Interleaved deltas ===
Some revision control software uses Interleaved deltas, a method that allows storing the history of text based files in a more efficient way than by using Delta compression.
=== Label ===
See tag.
=== Locking ===
When a developer locks a file, no one else can update that file until it is unlocked. Locking can be supported by the version control system, or via informal communications between developers (aka social locking).
=== Mainline ===
Similar to trunk, but there can be a mainline for each branch.
=== Merge ===
A merge or integration is an operation in which two sets of changes are applied to a file or set of files. Some sample scenarios are as follows:
A user, working on a set of files, updates or syncs their working copy with changes made, and checked into the repository, by other users.
A user tries to check in files that have been updated by others since the files were checked out, and the revision control software automatically merges the files (typically, after prompting the user if it should proceed with the automatic merge, and in some cases only doing so if the merge can be clearly and reasonably resolved).
A branch is created, the code in the files is independently edited, and the updated branch is later incorporated into a single, unified trunk.
A set of files is branched, a problem that existed before the branching is fixed in one branch, and the fix is then merged into the other branch. (This type of selective merge is sometimes known as a cherry pick to distinguish it from the complete merge in the previous case.)
=== Promote ===
The act of copying file content from a less controlled location into a more controlled location. For example, from a user's workspace into a repository, or from a stream to its parent.
=== Pull, push ===
Copy revisions from one repository into another. Pull is initiated by the receiving repository, while push is initiated by the source. Fetch is sometimes used as a synonym for pull, or to mean a pull followed by an update.
=== Pull request ===
A request that the maintainers of a repository review and merge (pull) a proposed set of changes, typically made on a branch or in a cloned repository.
=== Repository ===
The repository (or "repo") is the data structure in which the version control system stores the controlled files together with their history of revisions and associated metadata.
=== Resolve ===
The act of user intervention to address a conflict between different changes to the same document.
=== Reverse integration ===
The process of merging different team branches into the main trunk of the versioning system.
=== Revision and version ===
A version is any change in form. In SVK, a Revision is the state at a point in time of the entire tree in the repository.
=== Share ===
The act of making one file or folder available in multiple branches at the same time. When a shared file is changed in one branch, it is changed in other branches.
=== Stream ===
A container for branched files that has a known relationship to other such containers. Streams form a hierarchy; each stream can inherit various properties (like versions, namespace, workflow rules, subscribers, etc.) from its parent stream.
=== Tag ===
A tag or label refers to an important snapshot in time, consistent across many files. These files at that point may all be tagged with a user-friendly, meaningful name or revision number. See baselines, labels and tags.
=== Trunk ===
The trunk is the unique line of development that is not a branch (sometimes also called Baseline, Mainline or Master).
=== Update ===
An update (or sync, but sync can also mean a combined push and pull) merges changes made in the repository (by other people, for example) into the local working copy. Update is also the term used by some CM tools (CM+, PLS, SMS) for the change package concept (see changelist). It is synonymous with checkout in revision control systems that require each repository to have exactly one working copy (common in distributed systems).
=== Unlocking ===
Releasing a lock.
=== Working copy ===
The working copy is the local copy of files from a repository, at a specific time or revision. All work done to the files in a repository is initially done on a working copy, hence the name. Conceptually, it is a sandbox.
== See also ==
== Notes ==
== References ==
== External links ==
"Visual Guide to Version Control", Better explained.
Sink, Eric, "Source Control", SCM (how-to). The basics of version control.
Version control (also known as revision control, source control, and source code management) is the software engineering practice of controlling, organizing, and tracking different versions of computer files over their history; primarily source code text files, but more generally files of any type.
Version control is a component of software configuration management.
A version control system is a software tool that automates version control. Alternatively, version control is embedded as a feature of some systems such as word processors, spreadsheets, collaborative web docs, and content management systems, e.g., Wikipedia's page history.
Version control includes viewing old versions and enables reverting a file to a previous version.
== Overview ==
As teams develop software, it is common to deploy multiple versions of the same software, and for different developers to work on one or more different versions simultaneously. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk).
At the simplest level, developers could simply retain multiple copies of the different versions of the program and label them appropriately. This simple approach has been used in many large software projects. While it can work, it is inefficient: many near-identical copies of the program must be maintained, it requires a lot of self-discipline on the part of developers, and it often leads to mistakes. Because the copies share one code base, it also requires granting read-write-execute permission to a set of developers, which adds the burden of managing those permissions so that the code base is not compromised. Consequently, systems that automate some or all of the revision control process have been developed, hiding most operational steps from ordinary users.
Moreover, in software development, legal and business practice, and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated revision control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations.
Revision control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.
Many version control systems identify the version of a file as a number or letter, called the version number, version, revision number, revision, or revision level. For example, the first version of a file might be version 1; when the file is changed, the next version is 2. Each version is associated with a timestamp and the person making the change. Revisions can be compared, restored, and, with some types of files, merged.
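The bookkeeping just described can be sketched as a minimal in-memory store that numbers versions sequentially and records an author and timestamp for each. This is a toy model for illustration, not the data format of any real system:

```python
import time

class VersionedFile:
    """A minimal per-file version store: each commit records the new
    contents, the author, and a timestamp under the next version number."""

    def __init__(self):
        self.revisions = []  # list of (version, author, timestamp, contents)

    def commit(self, author, contents):
        version = len(self.revisions) + 1  # the first version is 1
        self.revisions.append((version, author, time.time(), contents))
        return version

    def get(self, version):
        """Return the contents stored under a given version number."""
        return self.revisions[version - 1][3]

    def revert(self, version):
        """Restore an old version by committing its contents as a new one,
        so the history itself is never rewritten."""
        return self.commit("revert", self.get(version))
```

Note that `revert` creates a new version rather than deleting history, which mirrors how most version control systems treat reversion.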
== History ==
IBM's OS/360 IEBUPDTE software update tool dates back to 1962, arguably a precursor to version control system tools. Two source management and version control packages that were heavily used by IBM 360/370 installations were The Librarian and Panvalet.
A full system designed for source code control was started in 1972: the Source Code Control System (SCCS), again for the OS/360. SCCS's user manual, published on December 4, 1975, implied in its introduction that it was the first deliberate revision control system. The Revision Control System (RCS) followed in 1982 and, later, Concurrent Versions System (CVS) added network and concurrent development features to RCS. After CVS, a dominant successor was Subversion, followed by the rise of distributed version control tools such as Git.
== Structure ==
Revision control manages changes to a set of data over time. These changes can be structured in various ways.
Often the data is thought of as a collection of many individual items, such as files or documents, and changes to individual files are tracked. This accords with intuitions about separate files but causes problems when identity changes, such as during renaming, splitting or merging of files. Accordingly, some systems, such as Git, instead consider changes to the data as a whole, which is less intuitive for simple changes but simplifies more complex changes.
When data that is under revision control is modified, after being retrieved by checking out, this is not in general immediately reflected in the revision control system (in the repository), but must instead be checked in or committed. A copy outside revision control is known as a "working copy". As a simple example, when editing a computer file, the data stored in memory by the editing program is the working copy, which is committed by saving. Concretely, one may print out a document, edit it by hand, and only later manually input the changes into a computer and save it. For source code control, the working copy is instead a copy of all files in a particular revision, generally stored locally on the developer's computer; in this case saving the file only changes the working copy, and checking into the repository is a separate step.
If multiple people are working on a single data set or document, they are implicitly creating branches of the data (in their working copies), and thus issues of merging arise, as discussed below. For simple collaborative document editing, this can be prevented by using file locking or simply avoiding working on the same document that someone else is working on.
Revision control systems are often centralized, with a single authoritative data store, the repository, and check-outs and check-ins done with reference to this central repository. Alternatively, in distributed revision control, no single repository is authoritative, and data can be checked out and checked into any repository. When checking into a different repository, this is interpreted as a merge or patch.
=== Graph structure ===
In terms of graph theory, revisions are generally thought of as a line of development (the trunk) with branches off of this, forming a directed tree, visualized as one or more parallel lines of development (the "mainlines" of the branches) branching off a trunk. In reality the structure is more complicated, forming a directed acyclic graph, but for many purposes "tree with merges" is an adequate approximation.
Revisions occur in sequence over time, and thus can be arranged in order, either by revision number or timestamp. Revisions are based on past revisions, though it is possible to largely or completely replace an earlier revision, such as "delete all existing text, insert new text". In the simplest case, with no branching or undoing, each revision is based on its immediate predecessor alone, and they form a simple line, with a single latest version, the "HEAD" revision or tip. In graph theory terms, drawing each revision as a point and each "derived revision" relationship as an arrow (conventionally pointing from older to newer, in the same direction as time), this is a linear graph.
If there is branching, so that multiple future revisions are based on a past revision, or undoing, so that a revision can depend on a revision older than its immediate predecessor, then the resulting graph is instead a directed tree (each node can have more than one child), and has multiple tips, corresponding to the revisions without children ("latest revision on each branch"). In principle the resulting tree need not have a preferred tip ("main" latest revision), just various different revisions, but in practice one tip is generally identified as HEAD. When a new revision is based on HEAD, it is either identified as the new HEAD, or considered a new branch. The list of revisions from the start to HEAD (in graph theory terms, the unique path in the tree, which forms a linear graph as before) is the trunk or mainline.
Conversely, when a revision can be based on more than one previous revision (when a node can have more than one parent), the resulting process is called a merge, and is one of the most complex aspects of revision control. This most often occurs when changes occur in multiple branches (most often two, but more are possible), which are then merged into a single branch incorporating both changes. If these changes overlap, it may be difficult or impossible to merge, and may require manual intervention or rewriting.
In the presence of merges, the resulting graph is no longer a tree, as nodes can have multiple parents, but is instead a rooted directed acyclic graph (DAG). The graph is acyclic since parents are always backwards in time, and rooted because there is an oldest version. Assuming there is a trunk, merges from branches can be considered as "external" to the tree – the changes in the branch are packaged up as a patch, which is applied to HEAD (of the trunk), creating a new revision without any explicit reference to the branch, and preserving the tree structure. Thus, while the actual relations between versions form a DAG, this can be considered a tree plus merges, and the trunk itself is a line.
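Representing the history as a mapping from each revision to its parents makes the "multiple tips" property easy to compute: the heads are exactly the revisions that no other revision is based on. A small sketch, using integer revision ids as a stand-in for real revision identifiers:

```python
def tips(parents):
    """Given a history as a mapping {revision: list of parent revisions},
    return the revisions that no other revision is based on (the heads)."""
    has_child = {p for ps in parents.values() for p in ps}
    return sorted(r for r in parents if r not in has_child)

# Trunk 1-2-3, a branch 4 off revision 2, and a merge revision 5
# whose two parents are the branch heads 3 and 4:
history = {1: [], 2: [1], 3: [2], 4: [2], 5: [3, 4]}
```

Before the merge (without revision 5) the history has two tips, 3 and 4, one per branch; after the merge, revision 5 is the single head, and the graph is a DAG rather than a tree because 5 has two parents.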
In distributed revision control, in the presence of multiple repositories these may be based on a single original version (a root of the tree), but there need not be an original root; instead there can be a separate root (oldest revision) for each repository. This can happen, for example, if two people start working on a project separately. Similarly, in the presence of multiple data sets (multiple projects) that exchange data or merge, there is no single root, though for simplicity one may think of one project as primary and the other as secondary, merged into the first with or without its own revision history.
== Specialized strategies ==
Engineering revision control developed from formalized processes based on tracking revisions of early blueprints or bluelines. This system of control implicitly allowed returning to an earlier state of the design, for cases in which an engineering dead-end was reached in the development of the design. A revision table was used to keep track of the changes made. Additionally, the modified areas of the drawing were highlighted using revision clouds.
=== In Business and Law ===
Version control is widespread in business and law. Indeed, "contract redline" and "legal blackline" are some of the earliest forms of revision control, and are still employed in business and law with varying degrees of sophistication. The most sophisticated techniques are beginning to be used for the electronic tracking of changes to CAD files (see product data management), supplanting the "manual" electronic implementation of traditional revision control.
== Source-management models ==
Traditional revision control systems use a centralized model where all the revision control functions take place on a shared server. If two developers try to change the same file at the same time, without some method of managing access the developers may end up overwriting each other's work. Centralized revision control systems solve this problem in one of two different "source management models": file locking and version merging.
=== Atomic operations ===
An operation is atomic if the system is left in a consistent state even if the operation is interrupted. The commit operation is usually the most critical in this sense. Commits tell the revision control system to make a group of changes final, and available to all users. Not all revision control systems have atomic commits; Concurrent Versions System lacks this feature.
=== File locking ===
The simplest method of preventing "concurrent access" problems involves locking files so that only one developer at a time has write access to the central "repository" copies of those files. Once one developer "checks out" a file, others can read that file, but no one else may change that file until that developer "checks in" the updated version (or cancels the checkout).
File locking has both merits and drawbacks. It can provide some protection against difficult merge conflicts when a user is making radical changes to many sections of a large file (or group of files). If the files are left exclusively locked for too long, other developers may be tempted to bypass the revision control software and change the files locally, forcing a difficult manual merge when the other changes are finally checked in. In a large organization, files can be left "checked out" and locked and forgotten about as developers move between projects - these tools may or may not make it easy to see who has a file checked out.
=== Version merging ===
Most version control systems allow multiple developers to edit the same file at the same time. The first developer to "check in" changes to the central repository always succeeds. The system may provide facilities to merge further changes into the central repository, and preserve the changes from the first developer when other developers check in.
Merging two files can be a very delicate operation, and usually possible only if the data structure is simple, as in text files. The result of a merge of two image files might not result in an image file at all. The second developer checking in the code will need to take care with the merge, to make sure that the changes are compatible and that the merge operation does not introduce its own logic errors within the files. These problems limit the availability of automatic or semi-automatic merge operations mainly to simple text-based documents, unless a specific merge plugin is available for the file types.
The concept of a reserved edit can provide an optional means to explicitly lock a file for exclusive write access, even when a merging capability exists.
=== Baselines, labels and tags ===
Most revision control tools will use only one of these similar terms (baseline, label, tag) to refer to the action of identifying a snapshot ("label the project") or the record of the snapshot ("try it with baseline X"). Typically only one of the terms baseline, label, or tag is used in documentation or discussion; they can be considered synonyms.
In most projects, some snapshots are more significant than others, such as those used to indicate published releases, branches, or milestones.
When both the term baseline and either of label or tag are used together in the same context, label and tag usually refer to the mechanism within the tool of identifying or making the record of the snapshot, and baseline indicates the increased significance of any given label or tag.
Most formal discussion of configuration management uses the term baseline.
== Distributed revision control ==
Distributed revision control systems (DRCS) take a peer-to-peer approach, as opposed to the client–server approach of centralized systems. Rather than a single, central repository on which clients synchronize, each peer's working copy of the codebase is a bona-fide repository.
Distributed revision control conducts synchronization by exchanging patches (change-sets) from peer to peer. This results in some important differences from a centralized system:
No canonical, reference copy of the codebase exists by default; only working copies.
Common operations (such as commits, viewing history, and reverting changes) are fast, because there is no need to communicate with a central server.: 7
Rather, communication is only necessary when pushing or pulling changes to or from other peers.
Each working copy effectively functions as a remote backup of the codebase and of its change-history, providing inherent protection against data loss.: 4
== Best practices ==
Following best practices is necessary to obtain the full benefits of version control. Best practice may vary by version control tool and the field to which version control is applied. The generally accepted best practices in software development include: making incremental, small, changes; making commits which involve only one task or fix -- a corollary to this is to commit only code which works and does not knowingly break existing functionality; utilizing branching to complete functionality before release; writing clear and descriptive commit messages, make what why and how clear in either the commit description or the code; and using a consistent branching strategy. Other best software development practices such as code review and automated regression testing may assist in the following of version control best practices.
== Costs and benefits ==
Costs and benefits will vary dependent upon the version control tool chosen and the field in which it is applied. This section speaks to the field of software development, where version control is widely applied.
=== Costs ===
In addition to the costs of licensing the version control software, using version control requires time and effort. The concepts underlying version control must be understood and the technical particulars required to operate the version control software chosen must be learned. Version control best practices must be learned and integrated into the organization's existing software development practices. Management effort may be required to maintain the discipline needed to follow best practices in order to obtain useful benefit.
=== Benefits ===
==== Allows for reverting changes ====
A core benefit is the ability to keep history and revert changes, allowing the developer to easily undo changes. This gives the developer more opportunity to experiment, eliminating the fear of breaking existing code.
==== Branching simplifies deployment, maintenance and development ====
Branching assists with deployment. Branching and merging, together with the production, packaging, and labeling of source code patches and the easy application of patches to code bases, simplify the maintenance and concurrent development of the multiple code bases associated with the various stages of the deployment process: development, testing, staging, production, etc.
==== Damage mitigation, accountability and process and design improvement ====
The record keeping provided by version control (the tracking of who did what, when, why, and how) supports damage mitigation, accountability, and process and design improvement, among other benefits.
When bugs arise, knowing what was done when helps with damage mitigation and recovery by assisting in the identification of what problems exist, how long they have existed, and determining problem scope and solutions. Previous versions can be installed and tested to verify conclusions reached by examination of code and commit messages.
==== Simplifies debugging ====
Version control can greatly simplify debugging. The application of a test case to multiple versions can quickly identify the change which introduced a bug. The developer need not be familiar with the entire code base and can focus instead on the code that introduced the problem.
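The idea can be sketched as a binary search over an ordered list of versions, in the spirit of (though much simpler than) tools such as `git bisect`. Here `versions` and `is_broken` are hypothetical stand-ins for a real repository and a real test case:

```python
# Find the first version for which a test case fails, assuming the oldest
# version is good and the newest is bad. Only about log2(N) versions need
# to be built and tested.

def find_first_bad(versions, is_broken):
    lo, hi = 0, len(versions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(versions[mid]):
            hi = mid             # the bug was introduced at mid or earlier
        else:
            lo = mid + 1         # the bug was introduced after mid
    return versions[lo]

versions = list(range(1, 101))   # 100 numbered revisions
first_bad = 37                   # pretend revision 37 introduced the bug
print(find_first_bad(versions, lambda v: v >= first_bad))   # → 37
```

With 100 revisions, the bug-introducing change is found after roughly seven test runs instead of a hundred.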
==== Improves collaboration and communication ====
Version control enhances collaboration in multiple ways. Since version control can identify conflicting changes, i.e. incompatible changes made to the same lines of code, there is less need for coordination among developers.
The packaging of commits, branches, and all the associated commit messages and version labels, improves communication between developers, both in the moment and over time. Better communication, whether instant or deferred, can improve the code review process, the testing process, and other critical aspects of the software development process.
== Integration ==
Some of the more advanced revision-control tools offer many other facilities, allowing deeper integration with other tools and software-engineering processes.
=== Integrated development environment ===
Plugins are often available for IDEs such as Oracle JDeveloper, IntelliJ IDEA, Eclipse, Visual Studio, Delphi, NetBeans IDE, Xcode, and GNU Emacs (via vc.el). Advanced research prototypes can even generate appropriate commit messages automatically.
== Common terminology ==
Terminology can vary from system to system, but some terms in common usage include:
=== Baseline ===
An approved revision of a document or source file to which subsequent changes can be made. See baselines, labels and tags.
=== Blame ===
A search for the author and revision that last modified a particular line.
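One simple way such an annotation can be derived is to replay each revision of a file and record which revision last changed each line. The sketch below uses this naive line-position model; real tools also track renames, moves, and copied text.

```python
# Hypothetical sketch of deriving "blame" data from a file's history.
# `revisions` is a list of (revision id, list of lines), oldest first.

def blame(revisions):
    """Return one (revision id, line) pair per line of the newest version."""
    annotated = []
    for rev_id, lines in revisions:
        new_annotated = []
        for i, line in enumerate(lines):
            if i < len(annotated) and annotated[i][1] == line:
                new_annotated.append(annotated[i])     # unchanged: keep earlier revision
            else:
                new_annotated.append((rev_id, line))   # new or modified in this revision
        annotated = new_annotated
    return annotated

history = [("r1", ["alpha", "beta"]),
           ("r2", ["alpha", "BETA", "gamma"])]
print(blame(history))   # → [('r1', 'alpha'), ('r2', 'BETA'), ('r2', 'gamma')]
```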
=== Branch ===
A set of files under version control may be branched or forked at a point in time so that, from that time forward, two copies of those files may develop at different speeds or in different ways independently of each other.
=== Change ===
A change (or diff, or delta) represents a specific modification to a document under version control. The granularity of the modification considered a change varies between version control systems.
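A diff between two versions of a document can be computed with Python's standard library difflib, which produces the familiar unified-diff format; the file names and contents below are illustrative.

```python
# Produce a unified diff between two versions of a (hypothetical) document.
import difflib

old = ["line one\n", "line two\n", "line three\n"]
new = ["line one\n", "line 2\n", "line three\n"]

diff = list(difflib.unified_diff(old, new, fromfile="a/doc.txt", tofile="b/doc.txt"))
print("".join(diff))   # shows "-line two" removed and "+line 2" added
```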
=== Change list ===
On many version control systems with atomic multi-change commits, a change list (or CL), change set, update, or patch identifies the set of changes made in a single commit. This can also represent a sequential view of the source code, allowing the examination of source as of any particular changelist ID.
=== Checkout ===
To check out (or co) is to create a local working copy from the repository. A user may specify a specific revision or obtain the latest. The term 'checkout' can also be used as a noun to describe the working copy. In systems that use locking, a file that has been checked out from a shared file server cannot be edited by other users until it is checked back in.
=== Clone ===
Cloning means creating a repository containing the revisions from another repository. This is equivalent to pushing or pulling into an empty (newly initialized) repository. As a noun, two repositories can be said to be clones if they are kept synchronized, and contain the same revisions.
=== Commit (noun) ===
A commit, as a noun, is a single recorded revision: the set of changes written to the repository in one operation, together with its metadata (author, date, and commit message).
=== Commit (verb) ===
To commit (check in, ci or, more rarely, install, submit or record) is to write or merge the changes made in the working copy back to the repository. A commit contains metadata, typically the author information and a commit message that describes the change.
=== Commit message ===
A short note, written by the developer, stored with the commit, which describes the commit. Ideally, it records why the modification was made, a description of the modification's effect or purpose, and non-obvious aspects of how the change works.
=== Conflict ===
A conflict occurs when different parties make changes to the same document, and the system is unable to reconcile the changes. A user must resolve the conflict by combining the changes, or by selecting one change in favour of the other.
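Conflict detection can be illustrated with a three-way comparison: each party's version of a line is compared against their common ancestor, and a conflict is flagged only when both sides changed the same line differently. This line-by-line model is a deliberate simplification of real merge algorithms.

```python
# Sketch of three-way merge conflict detection over same-length line lists.

def three_way_merge(base, ours, theirs):
    merged, conflicts = [], []
    for i, (b, o, t) in enumerate(zip(base, ours, theirs)):
        if o == t:               # both sides agree (changed identically, or not at all)
            merged.append(o)
        elif b == o:             # only their side changed the line
            merged.append(t)
        elif b == t:             # only our side changed the line
            merged.append(o)
        else:                    # both sides changed it differently: conflict
            merged.append(None)
            conflicts.append(i)
    return merged, conflicts

base   = ["a", "b", "c"]
ours   = ["a", "B", "c"]
theirs = ["a", "b2", "c"]
merged, conflicts = three_way_merge(base, ours, theirs)
print(conflicts)   # → [1]  (line 1 was changed differently by both parties)
```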
=== Delta compression ===
Most revision control software uses delta compression, which retains only the differences between successive versions of files. This allows for more efficient storage of many different versions of files.
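The principle can be sketched with difflib from the standard library: store the first version in full, and for each later version store only instructions for rebuilding it from its predecessor. Real systems use far more compact binary encodings than this illustrative one.

```python
# Sketch of delta storage: a delta is a list of "copy this slice of the old
# version" and "insert this new text" operations.
import difflib

def make_delta(old, new):
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=old, b=new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))          # reuse a slice of the old version
        else:
            ops.append(("insert", new[j1:j2]))    # store only the new text
    return ops

def apply_delta(old, ops):
    out = []
    for op in ops:
        if op[0] == "copy":
            out.append(old[op[1]:op[2]])
        else:
            out.append(op[1])
    return "".join(out)

v1 = "the quick brown fox"
v2 = "the quick red fox"
delta = make_delta(v1, v2)
print(apply_delta(v1, delta) == v2)   # → True
```

Only the changed portion of `v2` is stored in the delta; the unchanged text is represented as references into `v1`.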
=== Dynamic stream ===
A stream in which some or all file versions are mirrors of the parent stream's versions.
=== Export ===
Exporting is the act of obtaining the files from the repository. It is similar to checking out except that it creates a clean directory tree without the version-control metadata used in a working copy. This is often used prior to publishing the contents, for example.
=== Fetch ===
See pull.
=== Forward integration ===
The process of merging changes made in the main trunk into a development (feature or team) branch.
=== Head ===
Also sometimes called tip, this refers to the most recent commit, either to the trunk or to a branch. The trunk and each branch have their own head, though HEAD is sometimes loosely used to refer to the trunk.
=== Import ===
Importing is the act of copying a local directory tree (that is not currently a working copy) into the repository for the first time.
=== Initialize ===
To create a new, empty repository.
=== Interleaved deltas ===
Some revision control software uses interleaved deltas, a method of storing the history of text-based files more efficiently than with delta compression.
=== Label ===
See tag.
=== Locking ===
When a developer locks a file, no one else can update that file until it is unlocked. Locking can be supported by the version control system, or via informal communications between developers (aka social locking).
=== Mainline ===
Similar to trunk, but there can be a mainline for each branch.
=== Merge ===
A merge or integration is an operation in which two sets of changes are applied to a file or set of files. Some sample scenarios are as follows:
A user, working on a set of files, updates or syncs their working copy with changes made, and checked into the repository, by other users.
A user tries to check in files that have been updated by others since the files were checked out, and the revision control software automatically merges the files (typically, after prompting the user if it should proceed with the automatic merge, and in some cases only doing so if the merge can be clearly and reasonably resolved).
A branch is created, the code in the files is independently edited, and the updated branch is later incorporated into a single, unified trunk.
A set of files is branched, a problem that existed before the branching is fixed in one branch, and the fix is then merged into the other branch. (This type of selective merge is sometimes known as a cherry pick to distinguish it from the complete merge in the previous case.)
=== Promote ===
The act of copying file content from a less controlled location into a more controlled location. For example, from a user's workspace into a repository, or from a stream to its parent.
=== Pull, push ===
Copy revisions from one repository into another. Pull is initiated by the receiving repository, while push is initiated by the source. Fetch is sometimes used as a synonym for pull, or to mean a pull followed by an update.
=== Pull request ===
A request that changes committed on one branch or fork be reviewed and merged into another branch or repository.
=== Repository ===
The repository (or "repo") is where the current and historical data of the files under version control are stored.
=== Resolve ===
The act of user intervention to address a conflict between different changes to the same document.
=== Reverse integration ===
The process of merging different team branches into the main trunk of the versioning system.
=== Revision and version ===
A version is any change in form. In SVK, a revision is the state of the entire repository tree at a point in time.
=== Share ===
The act of making one file or folder available in multiple branches at the same time. When a shared file is changed in one branch, it is changed in other branches.
=== Stream ===
A container for branched files that has a known relationship to other such containers. Streams form a hierarchy; each stream can inherit various properties (like versions, namespace, workflow rules, subscribers, etc.) from its parent stream.
=== Tag ===
A tag or label refers to an important snapshot in time, consistent across many files. These files at that point may all be tagged with a user-friendly, meaningful name or revision number. See baselines, labels and tags.
=== Trunk ===
The trunk is the unique line of development that is not a branch (sometimes also called the baseline, mainline, or master).
=== Update ===
An update (or sync, though sync can also mean a combined push and pull) merges changes made in the repository (by other people, for example) into the local working copy. Update is also the term used by some CM tools (CM+, PLS, SMS) for the change package concept (see changelist). It is synonymous with checkout in revision control systems that require each repository to have exactly one working copy (common in distributed systems).
=== Unlocking ===
Releasing a lock.
=== Working copy ===
The working copy is the local copy of files from a repository, at a specific time or revision. All work done to the files in a repository is initially done on a working copy, hence the name. Conceptually, it is a sandbox.
== See also ==
== Notes ==
== References ==
== External links ==
"Visual Guide to Version Control", Better explained.
Sink, Eric, "Source Control", SCM (how-to). The basics of version control.
A graphical user interface builder (or GUI builder), also known as a GUI designer or sometimes a RAD IDE, is a software development tool that simplifies the creation of GUIs by allowing the designer to arrange graphical control elements (often called widgets) using a drag-and-drop WYSIWYG editor. Without a GUI builder, a GUI must be built by manually specifying each widget's parameters in source code, with no visual feedback until the program is run.
User interfaces are commonly programmed using an event-driven architecture, so GUI builders also simplify creating event-driven code. This supporting code connects software widgets with the outgoing and incoming events that trigger the functions providing the application logic.
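The wiring a builder generates is essentially a mapping from widget events to handler functions. The framework-free sketch below shows the pattern; the `Button` class, its event names, and its methods are invented for illustration rather than taken from any real toolkit.

```python
# Minimal sketch of event-driven GUI wiring: widgets hold handler mappings,
# and the toolkit's event loop fires events that invoke application logic.

class Button:
    def __init__(self, label):
        self.label = label
        self.handlers = {}                 # event name -> list of callbacks

    def connect(self, event, handler):
        # This is the supporting code a GUI builder writes (or serializes).
        self.handlers.setdefault(event, []).append(handler)

    def fire(self, event):
        # A real event loop would call this when the user acts on the widget.
        for handler in self.handlers.get(event, []):
            handler(self)

clicks = []
save = Button("Save")
save.connect("clicked", lambda widget: clicks.append(widget.label))
save.fire("clicked")
print(clicks)   # → ['Save']
```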
Some graphical user interface builders automatically generate all the source code for a graphical control element. Others, like Interface Builder or Glade Interface Designer, generate serialized object instances that are then loaded by the application.
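The serialized-description approach can be sketched with the standard library: the application parses a declarative layout at run time and instantiates widgets from it. The XML format below is invented for illustration; it is not the actual Glade or Interface Builder schema.

```python
# Sketch of loading a serialized interface description instead of generating
# source code for each widget.
import xml.etree.ElementTree as ET

LAYOUT = """
<interface>
  <widget class="Window" id="main">
    <widget class="Button" id="ok" label="OK"/>
    <widget class="Button" id="cancel" label="Cancel"/>
  </widget>
</interface>
"""

def load_widgets(xml_text):
    root = ET.fromstring(xml_text)
    # Flatten the tree into (id, class, label) records that a toolkit
    # would instantiate as real widget objects.
    return [(w.get("id"), w.get("class"), w.get("label"))
            for w in root.iter("widget")]

for record in load_widgets(LAYOUT):
    print(record)
```

Because the layout is data rather than code, a designer can edit it in the builder without touching, or recompiling, the application logic.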
== List of GUI builders ==
=== C language based ===
GTK / Glade Interface Designer
Motif
XForms (toolkit) fdesign
Intrinsics
=== C# based ===
UWP / Windows Presentation Foundation / WinForms
Microsoft Visual Studio XAML Editor, XAML based GUI layout
Microsoft Expression Blend
SharpDevelop
Xamarin.Forms / .NET Core
Xamarin Studio
=== C++ based ===
UWP / Windows Presentation Foundation / WinForms
Microsoft Visual Studio XAML Editor, XAML based GUI layout
Microsoft Blend
Qt (toolkit)
Qt Creator
FLTK
FLUID
JUCE
U++
wxWidgets
wxFormBuilder
=== Objective-C / Swift based ===
Cocoa (modern) and Carbon (deprecated).
Xcode
GNUstep (formerly OpenStep)
Gorm
=== Java based ===
Android Studio, XML-based GUI layout
NetBeans GUI design tool
=== HTML/JavaScript based ===
Adobe Dreamweaver — Obsolete as of 2022
=== Object Pascal based ===
Delphi / VCL (Visual Component Library)
Lazarus / LCL (Lazarus Component Library)
=== Tk framework based ===
Tk (framework) for Tcl
ActiveState Komodo (No longer has a GUI builder)
TKproE (TCL/TK Programming Environment)
=== Visual Basic based ===
UWP / Windows Presentation Foundation / WinForms
Microsoft Visual Studio XAML Editor, XAML based GUI layout
Microsoft Expression Blend
=== Other tools ===
Adobe Animate
App Inventor for Android
AutoIt
Axure RP
Creately
Embedded Wizard
GEM
Interface Builder
LucidChart
OpenWindows
Resource construction set
Stetic
Scaleform
Wavemaker
== List of development environments ==
=== IDEs with GUI builders (RAD IDEs) ===
4D
ActiveState Komodo (No longer has a GUI builder)
Android Studio
Anjuta
AutoIt3
C++Builder
Clarion
Code::Blocks
CodeLite
dBase
Delphi/RAD Studio
Embedded Wizard
Eclipse
Gambas
IntelliJ IDEA
InForm
JDeveloper
KDevelop
LabWindows/CVI
LANSA
Lazarus
Liberty BASIC
Microsoft Visual Studio
MonoDevelop
MSEide+MSEgui
MyEclipse
NetBeans
OutSystems
PascalABC.NET
Projucer
Purebasic
Qt Creator
SharpDevelop
Softwell Maker
U++
VB6
WinFBE
Xcode
Xojo
== See also ==
Rapid application development (RAD)
Human interface guidelines (HIG)
Human interface device
User interface markup language
User interface modeling
Design-Oriented Programming
Linux on the desktop
== References ==
Application-release automation (ARA) refers to the process of packaging and deploying an application or update of an application from development, across various environments, and ultimately to production. ARA solutions must combine the capabilities of deployment automation, environment management and modeling, and release coordination.
== Relationship with DevOps ==
ARA tools help cultivate DevOps best practices by providing a combination of automation, environment modeling, and workflow-management capabilities. These practices help teams deliver software rapidly, reliably, and responsibly. ARA tools help achieve the key DevOps goal of continuous delivery by supporting a large number of releases delivered quickly.
== Relationship with deployment ==
ARA is more than just software-deployment automation – it deploys applications using structured release-automation techniques that allow for an increase in visibility for the whole team. It combines workload automation and release-management tools as they relate to release packages, as well as movement through different environments within the DevOps pipeline. ARA tools help regulate deployments, how environments are created and deployed, and how and when releases are deployed.
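The release-coordination idea described above can be sketched as a release object promoted through an ordered pipeline of environments, with each step gated on sign-off in the previous one. The stage names, the approvals structure, and the `promote` function are all illustrative assumptions, not any ARA product's model.

```python
# Sketch of gated promotion through deployment environments.

PIPELINE = ["development", "testing", "staging", "production"]

def promote(release, current_env, approvals):
    """Move a release to the next environment, if the current one signed off."""
    nxt = PIPELINE.index(current_env) + 1
    if nxt >= len(PIPELINE):
        raise ValueError(f"{release} is already in production")
    if current_env not in approvals.get(release, set()):
        raise PermissionError(f"{release} has not been approved in {current_env}")
    return PIPELINE[nxt]

approvals = {"v2.1": {"development", "testing"}}
env = promote("v2.1", "development", approvals)   # passes: development approved
env = promote("v2.1", env, approvals)             # passes: testing approved
print(env)                                        # → staging
```

Modeling environments and approvals as data in this way is what gives the whole team visibility into where each release stands.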
== ARA Solutions ==
All ARA solutions must include capabilities in automation, environment modeling, and release coordination. Additionally, the solution must provide this functionality without reliance on other tools.
== References ==
Application software is any computer program that is intended for end-user use – not operating, administering or programming the computer. An application (app, application program, software application) is any program that can be categorized as application software. Common types of applications include word processors, media players, and accounting software.
The term application software refers to all applications collectively and can be used to differentiate from system and utility software.
Applications may be bundled with the computer and its system software or published separately. Applications may be proprietary or open-source.
The short term app (coined in 1981 or earlier) became popular with the 2008 introduction of the iOS App Store, to refer to applications for mobile devices such as smartphones and tablets. Later, with introduction of the Mac App Store (in 2010) and Windows Store (in 2011), the term was extended in popular use to include desktop applications.
== Terminology ==
The delineation between system software such as operating systems and application software is not exact and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separate piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable by the user, as in the case of software used to control a VCR, DVD player, or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app: see Application Portfolio Management.
When used as an adjective, application is not restricted to mean: of or on application software. For example, concepts such as application programming interface (API), application server, application virtualization, application lifecycle management and portable application apply to all computer programs alike, not just application software.
=== Killer app ===
Sometimes a new and popular application arises that only runs on one platform that results in increasing the desirability of that platform. This is called a killer application or killer app, coined in the late 1980s. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For the BlackBerry, it was its email software.
=== Platform specific naming ===
Some applications are available for multiple platforms while others only work on one and are thus called, for example, a geography application for Microsoft Windows, or an Android application for education, or a Linux game.
== Classification ==
There are many different and alternative ways to classify application software.
From the legal point of view, application software is mainly classified with a black-box approach, according to the rights of its end-users or subscribers (with possible intermediate and tiered subscription levels).
Software applications are also classified with respect to the programming language in which the source code is written or executed, and concerning their purpose and outputs.
=== By property and use rights ===
Application software is usually distinguished into two main classes: closed source vs open source software applications, and free or proprietary software applications.
Proprietary software is placed under the exclusive copyright, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification". Such applications can only get add-ons from third parties.
Free and open-source software (FOSS) may be run, distributed, sold, or extended for any purpose, and, being open, may likewise be modified or reverse-engineered.
FOSS software applications released under a free license may be perpetual and also royalty-free. However, the owner, the holder, or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) may add exceptions, limitations, time decays, or expiry dates to the license terms of use.
Public-domain software is a type of FOSS that is royalty-free and can be run, distributed, modified, reverse-engineered, republished, or used in derivative works without any copyright attribution, and therefore without risk of revocation. It can even be sold, but without transferring the public-domain property to other single subjects. Public-domain software can be released under an (un)licensing legal statement, which enforces those terms and conditions for an indefinite duration (for a lifetime, or forever).
=== By coding language ===
Since the development and near-universal adoption of the web, an important distinction has emerged between web applications, written with HTML, JavaScript and other web-native technologies and typically requiring one to be online and running a web browser, and the more traditional native applications written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated.
=== By purpose and output ===
Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or department within an organization. Integrated suites of software try to handle every possible specific aspect of, for example, manufacturing or banking work, accounting, or customer service.
There are many types of application software:
An application suite consists of multiple applications bundled together. They usually have related functions, features, and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, LibreOffice and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of an entire organization's processes and data flows, across several departments, often in a large distributed environment. Examples include enterprise resource planning systems, customer relationship management (CRM) systems, data replication engines, and supply chain management software. Departmental Software is a sub-type of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT Helpdesk.)
Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and systems for managing networks and security.)
Application platform as a service (aPaaS) is a cloud computing service that offers development and deployment environments for application services.
Information worker software lets users create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, analytical, collaborative and documentation tools. Word processors, spreadsheets, email and blog clients, personal information systems, and individual media editors may aid in multiple information worker tasks.
Content access software is used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include media players, web browsers, and help browsers.)
Educational software is related to content access software, but has the content or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
Simulation software simulates physical or abstract systems for either research, training, or entertainment purposes.
Media development software generates print and electronic media for others to consume, most often in a commercial or educational setting. This includes graphic-art software, desktop publishing software, multimedia development software, HTML editors, digital-animation editors, digital audio and video composition, and many others.
Product engineering software is used in developing hardware and software products. This includes computer-aided design (CAD), computer-aided engineering (CAE), computer language editing and compiling tools, integrated development environments, and application programmer interfaces.
Entertainment Software can refer to video games, screen savers, programs to display motion pictures or play recorded music, and other forms of entertainment which can be experienced through the use of a computing device.
=== By platform ===
Applications can also be classified by computing platforms such as a desktop application for a particular operating system, delivery network such as in cloud computing and Web 2.0 applications, or delivery devices such as mobile apps for mobile devices.
The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word processing tasks not used to control hardware via a command-line interface or graphical user interface. This does not include application software bundled within operating systems such as a software calculator or text editor.
=== Information worker software ===
Accounting software
Data management
Contact manager
Spreadsheet
Database software
Documentation
Document automation
Word processor
Desktop publishing software
Diagramming software
Presentation software
Email
Blog software
Enterprise resource planning
Financial software
Banking software
Clearing systems
Financial accounting software
Financial software
Field service management
Workforce management software
Project management software
Calendaring software
Employee scheduling software
Workflow software
Reservation systems
=== Entertainment software ===
Screen savers
Video games
Arcade video games
Console games
Mobile games
Personal computer games
Software art
Demo
64K intro
=== Educational software ===
Classroom management
Reference software
Sales readiness software
Survey management
Encyclopedia software
=== Enterprise infrastructure software ===
Artificial Intelligence for IT Operations (AIOps)
Business workflow software
Database management system (DBMS)
Digital asset management (DAM) software
Document management software
Geographic information system (GIS)
=== Simulation software ===
Computer simulators
Scientific simulators
Social simulators
Battlefield simulators
Emergency simulators
Vehicle simulators
Flight simulators
Driving simulators
Simulation games
Vehicle simulation games
=== Media development software ===
3D computer graphics software
Animation software
Graphic art software
Raster graphics editor
Vector graphics editor
Image organizer
Video editing software
Audio editing software
Digital audio workstation
Music sequencer
Scorewriter
HTML editor
Game development tool
=== Product engineering software ===
Hardware engineering
Computer-aided engineering
Computer-aided design (CAD)
Computer-aided manufacturing (CAM)
Finite element analysis
=== Software engineering ===
Compiler software
Integrated development environment
Compiler
Linker
Debugger
Version control
Game development tool
License manager
== See also ==
Software development – Creation and maintenance of software
Mobile app – Software application designed to run on mobile devices
Web application – Application that uses a web browser as a client
Server application – Computer to access a central resource or service on a network
Super-app – Mobile application that provides multiple services including financial transactions
== References ==
== External links ==
Learning materials related to Application software at Wikiversity | Wikipedia/Application_software |
Network Security Toolkit (NST) is a Linux-based Live DVD/USB Flash Drive that provides a set of free and open-source computer security and networking tools to perform routine security and networking diagnostic and monitoring tasks. The distribution can be used as a network security analysis, validation and monitoring tool on servers hosting virtual machines. The majority of tools published in the article "Top 125 security tools" by Insecure.org are available in the toolkit. NST has package management capabilities similar to Fedora and maintains its own repository of additional packages.
== Features ==
Many tasks that can be performed within NST are available through a web interface called NST WUI. Among the tools that can be used through this interface are nmap with the visualization tool Zenmap, ntop, a Network Interface Bandwidth Monitor, a Network Segment ARP Scanner, a session manager for VNC, a minicom-based terminal server, serial port monitoring, and WPA PSK management.
Other features include: geolocation-based visualization of ntopng, ntop, Wireshark, traceroute, NetFlow and Kismet data, plotting host addresses, IPv4 address conversations, traceroute data and wireless access points on Google Earth or a Mercator world map bitmap image; a browser-based packet capture and protocol analysis system capable of monitoring up to four network interfaces using Wireshark; and a Snort-based intrusion detection system with a "collector" backend that stores incidents in a MySQL database. For web developers, there is also a JavaScript console with a built-in object library providing functions that aid the development of dynamic web pages.
=== Host Geolocations ===
The following example ntop host geolocation images were generated by NST.
=== Network Monitors ===
The following image depicts the interactive dynamic SVG/AJAX enabled Network Interface Bandwidth Monitor which is integrated into the NST WUI. Also shown is a Ruler Measurement tool overlay to perform time and bandwidth rate analysis.
== See also ==
BackTrack
Kali Linux
List of digital forensic tools
Computer Security
List of live CDs
== References ==
Smith, Jesse (2020-06-07). Distribution Release: Network Security Toolkit 32-11992. DistroWatch.
== External links ==
Official website
NST at SourceForge
Network Security Geolocation Matrix
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on configurable security rules. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet or between several VLANs. Firewalls can be categorized as network-based or host-based.
== History ==
The term firewall originally referred to a wall to confine a fire within a line of adjacent buildings. Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the 1980s to network technology that emerged when the Internet was fairly new in terms of its global use and connectivity. The predecessors to firewalls for network security were routers used in the 1980s. Because they already segregated networks, routers could filter packets crossing them.
Before it was used in real-life computing, the term appeared in John Badham's 1983 computer‑hacking movie WarGames, spoken by the bearded and bespectacled programmer named Paul Richter, which possibly inspired its later use.
One of the earliest commercially successful firewall and network address translation (NAT) products was the PIX (Private Internet eXchange) Firewall, invented in 1994 by Network Translation Inc., a startup founded and run by John Mayes. The PIX Firewall technology was coded by Brantley Coile as a consultant software developer. Recognizing the emerging IPv4 address depletion problem, they designed the PIX to enable organizations to securely connect private networks to the public internet using a limited number of registered IP addresses. The innovative PIX solution quickly gained industry acclaim, earning the prestigious "Hot Product of the Year" award from Data Communications Magazine in January 1995. Cisco Systems, seeking to expand into the rapidly growing network security market, subsequently acquired Network Translation Inc. in November 1995 to obtain the rights to the PIX technology. The PIX became one of Cisco's flagship firewall product lines before eventually being succeeded by the Adaptive Security Appliance (ASA) platform introduced in 2005.
== Types of firewalls ==
Firewalls are categorized as either network-based or host-based systems. Network-based firewalls are positioned between two or more networks, typically between the local area network (LAN) and wide area network (WAN), their basic function being to control the flow of data between connected networks. They are either a software appliance running on general-purpose hardware, a hardware appliance running on special-purpose hardware, or a virtual appliance running on a virtual host controlled by a hypervisor. Firewall appliances may also offer non-firewall functionality, such as DHCP or VPN services. Host-based firewalls are deployed directly on the host itself to control network traffic or other computing resources. This can be a daemon or service as a part of the operating system or an agent application for protection.
=== Packet filter ===
The first reported type of network firewall is called a packet filter which inspects packets transferred between computers. The firewall maintains an access-control list which dictates what packets will be looked at and what action should be applied, if any, with the default action set to silent discard. Three basic actions regarding the packet consist of a silent discard, discard with Internet Control Message Protocol or TCP reset response to the sender, and forward to the next hop. Packets may be filtered by source and destination IP addresses, protocol, or source and destination ports. The bulk of Internet communication in 20th and early 21st century used either Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) in conjunction with well-known ports, enabling firewalls of that era to distinguish between specific types of traffic such as web browsing, remote printing, email transmission, and file transfers.
The first paper published on firewall technology was in 1987, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture. In 1992, Steven McCanne and Van Jacobson released a paper on BSD Packet Filter (BPF) while at Lawrence Berkeley Laboratory.
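The filtering behaviour described above — matching packets against an access-control list, with a silent-discard default — can be sketched in a few lines. The rule set, packet representation, and field names below are assumptions for illustration, not any real firewall's format:

```python
# Minimal packet-filter sketch (illustrative only, not a real firewall).
# A rule field of None acts as a wildcard; the first matching rule wins,
# and the default action is a silent discard, as described in the text.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                      # "forward", "discard", or "reject"
    protocol: Optional[str] = None   # e.g. "tcp", "udp"; None matches any
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    dst_port: Optional[int] = None

def matches(rule: Rule, packet: dict) -> bool:
    return all(
        getattr(rule, field) is None or getattr(rule, field) == packet[field]
        for field in ("protocol", "src_ip", "dst_ip", "dst_port")
    )

def filter_packet(rules: list, packet: dict) -> str:
    for rule in rules:
        if matches(rule, packet):
            return rule.action
    return "discard"                 # default action: silent discard

rules = [
    Rule(action="forward", protocol="tcp", dst_port=80),  # allow web traffic
    Rule(action="reject",  protocol="tcp", dst_port=23),  # refuse telnet with a reset
]

packet = {"protocol": "tcp", "src_ip": "10.0.0.5",
          "dst_ip": "203.0.113.7", "dst_port": 80}
print(filter_packet(rules, packet))  # forward
```

Any packet matching no rule, such as an unsolicited UDP datagram, falls through to the silent discard, which is why first-generation filters were described as "default deny" devices when configured this way.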
=== Connection tracking ===
From 1989–1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways.
Second-generation firewalls perform the work of their first-generation predecessors but also maintain knowledge of specific conversations between endpoints by remembering which port number the two IP addresses are using at layer 4 (transport layer) of the OSI model for their conversation, allowing examination of the overall exchange between the nodes.
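The conversation tracking described above can be sketched as a state table keyed on the layer-4 endpoint pair: an allowed outbound packet records the conversation, and inbound traffic is accepted only if it reverses a recorded conversation. The packet format and state representation are assumptions for this sketch:

```python
# Sketch of second-generation (stateful) filtering: remember established
# conversations by their address/port 4-tuple so replies are recognised.

established = set()  # {(src_ip, src_port, dst_ip, dst_port)}

def record_outbound(packet: dict) -> None:
    # An allowed outbound packet creates state for the conversation.
    established.add((packet["src_ip"], packet["src_port"],
                     packet["dst_ip"], packet["dst_port"]))

def inbound_allowed(packet: dict) -> bool:
    # Inbound traffic passes only if it is the reverse of a conversation
    # that an inside host initiated.
    reverse = (packet["dst_ip"], packet["dst_port"],
               packet["src_ip"], packet["src_port"])
    return reverse in established

record_outbound({"src_ip": "192.168.1.10", "src_port": 49152,
                 "dst_ip": "203.0.113.7", "dst_port": 443})
print(inbound_allowed({"src_ip": "203.0.113.7", "src_port": 443,
                       "dst_ip": "192.168.1.10", "dst_port": 49152}))  # True
```

Real implementations also expire entries (e.g. on TCP FIN/RST or a timeout) so the state table does not grow without bound.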
=== Application layer ===
Marcus Ranum, Wei Xu, and Peter Churchyard released an application firewall known as Firewall Toolkit (FWTK) in October 1993. This became the basis for Gauntlet firewall at Trusted Information Systems.
The key benefit of application layer filtering is that it can understand certain applications and protocols such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext Transfer Protocol (HTTP). This allows it to identify unwanted applications or services using a non standard port, or detect if an allowed protocol is being abused. It can also provide unified security management including enforced encrypted DNS and virtual private networking.
As of 2012, the next-generation firewall provides a wider range of inspection at the application layer, extending deep packet inspection functionality to include, but is not limited to:
Web filtering
Intrusion prevention systems
User identity management
Web application firewall
Content inspection and heuristic analysis
TLS Inspection
==== Endpoint specific ====
Endpoint-based application firewalls function by determining whether a process should accept any given connection. Application firewalls filter connections by examining the process ID of data packets against a rule set for the local process involved in the data transmission. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers. Application firewalls that hook into socket calls are also referred to as socket filters.
== Firewall Policies ==
At the core of a firewall's operation are the policies that govern its decision-making process. These policies, collectively known as firewall rules, are the specific guidelines that determine the traffic allowed or blocked across a network's boundaries.
Firewall rules are based on the evaluation of network packets against predetermined security criteria. A network packet, which carries data across networks, must match certain attributes defined in a rule to be allowed through the firewall. These attributes commonly include:
Direction: Inbound or outbound traffic
Source: Where the traffic originates (IP address, range, network, or zone)
Destination: Where the traffic is headed (IP address, range, network, or zone)
Port: Network ports specific to various services (e.g., port 80 for HTTP)
Protocol: The type of network protocol (e.g., TCP, UDP, ICMP)
Applications: Layer 7 (L7) inspection or a grouping of services
Action: Whether to allow, deny, drop, or require further inspection for the traffic
=== Zones ===
Zones are logical segments within a network that group together devices with similar security requirements. By partitioning a network into zones, such as "Technical", "WAN", "LAN", "Public," "Private," "DMZ", and "Wireless," administrators can enforce policies that control the flow of traffic between them. Each zone has its own level of trust and is governed by specific firewall rules that regulate the ingress and egress of data.
A typical default is to allow all traffic from LAN to WAN, and to drop all traffic from WAN to LAN.
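The zone default just described can be expressed as a tiny rule table. The zone names and rule format here are assumptions for illustration:

```python
# Hypothetical zone-based rule table implementing the typical default:
# allow LAN -> WAN, drop WAN -> LAN, and drop anything not explicitly allowed.

RULES = [
    {"src_zone": "LAN", "dst_zone": "WAN", "action": "allow"},
    {"src_zone": "WAN", "dst_zone": "LAN", "action": "drop"},
]

def evaluate_zone_policy(src_zone: str, dst_zone: str) -> str:
    for rule in RULES:
        if rule["src_zone"] == src_zone and rule["dst_zone"] == dst_zone:
            return rule["action"]
    return "drop"  # implicit default: anything unmatched is dropped

print(evaluate_zone_policy("LAN", "WAN"))  # allow
print(evaluate_zone_policy("WAN", "LAN"))  # drop
```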
=== Services ===
In networking terms, services are specific functions typically identified by a network port and protocol. Common examples include HTTP/HTTPS (web traffic) on ports 80 and 443, FTP (file transfer) on port 21, and SMTP (email) on port 25. Services are the engines behind the applications users depend on. From a security aspect, controlling access to services is crucial because services are common targets for exploitation. Firewalls employ rules that stipulate which services should be accessible, to whom, and in what context. For example, a firewall might be configured to block incoming FTP requests to prevent unauthorized file uploads but allow outgoing HTTPS requests for web browsing.
=== Applications ===
Applications refer to the software systems that users interact with while on the network. They can range from web browsers and email clients to complex database systems and cloud-based services. In network security, applications are important because different types of traffic can pose varying security risks. Thus, firewall rules can be crafted to identify and control traffic based on the application generating or receiving it. By using application awareness, firewalls can allow, deny, or limit traffic for specific applications according to organizational policies and compliance requirements, thereby mitigating potential threats from vulnerable or undesired applications.
In firewall rules, an application can be either a grouping of services or traffic identified by L7 inspection.
=== User ID ===
Implementing firewall rules based on IP addresses alone is often insufficient due to the dynamic nature of user location and device usage. To apply identity-based rules, the firewall must translate a user ID into one or more IP addresses.
This is where the concept of "User ID" makes a significant impact. User ID allows firewall rules to be crafted based on individual user identities, rather than just fixed source or destination IP addresses. This enhances security by enabling more granular control over who can access certain network resources, regardless of where they are connecting from or what device they are using.
The User ID technology is typically integrated into firewall systems through the use of directory services such as Active Directory, LDAP, RADIUS or TACACS+. These services link the user's login information to their network activities. By doing this, the firewall can apply rules and policies that correspond to user groups, roles, or individual user accounts instead of purely relying on the network topology.
==== Example of Using User ID in Firewall Rules ====
Consider a school that wants to restrict access to a social media server from students. They can create a rule in the firewall that utilises User ID information to enforce this policy.
Directory Service Configuration — First, the firewall must be configured to communicate with the directory service that stores user group memberships. In this case, an Active Directory server.
User Identification — The firewall maps network traffic to specific user IDs by interpreting authentication logs. When a user logs on, the firewall associates that login with the user's IP address.
Define User Groups — Within the firewall's management interface, define user groups based on the directory service. For example, create groups such as "Students".
Create Firewall Rule:
Source: User ID (e.g., Students)
Destination: list of IP addresses
Service/Application: Allowed services (e.g., HTTP, HTTPS)
Action: Deny
Implement Default Allow Rule:
Source: LAN zone
Destination: WAN zone
Service/Application: Any
Action: Allow
With this setup, users who authenticate and are identified as members of "Students" are denied access to the social media servers. All other traffic originating from the LAN zone is allowed.
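The school example above can be sketched as code. The directory lookup is faked with dictionaries here; a real deployment would resolve IP-to-user mappings from authentication logs and group memberships from Active Directory or LDAP, as described in the text. All names and addresses below are hypothetical:

```python
# Sketch of user-ID-based rule evaluation for the school example.
# USER_GROUPS stands in for a directory service; IP_TO_USER stands in
# for mappings learned from authentication logs.

USER_GROUPS = {"alice": {"Students"}, "bob": {"Staff"}}    # assumed directory data
IP_TO_USER  = {"10.0.5.21": "alice", "10.0.5.22": "bob"}   # from auth logs
SOCIAL_MEDIA = {"198.51.100.10", "198.51.100.11"}          # hypothetical server list

def evaluate_user_rule(src_ip: str, dst_ip: str) -> str:
    user = IP_TO_USER.get(src_ip)
    groups = USER_GROUPS.get(user, set())
    # Rule 1: deny Students access to the social media servers.
    if "Students" in groups and dst_ip in SOCIAL_MEDIA:
        return "deny"
    # Rule 2: default allow LAN -> WAN.
    return "allow"

print(evaluate_user_rule("10.0.5.21", "198.51.100.10"))  # deny  (a Student)
print(evaluate_user_rule("10.0.5.22", "198.51.100.10"))  # allow (Staff)
```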
== Most common firewall log types ==
Traffic Logs:
Description: Traffic logs record comprehensive details about data traversing the network. This includes source and destination IP addresses, port numbers, protocols used, and the action taken by the firewall (e.g., allow, drop, or reject).
Significance: Essential for network administrators to analyze and understand the patterns of communication between devices, aiding in troubleshooting and optimizing network performance.
Threat Prevention Logs:
Description: Logs specifically designed to capture information related to security threats. This encompasses alerts from intrusion prevention systems (IPS), antivirus events, anti-bot detections, and other threat-related data.
Significance: Vital for identifying and responding to potential security breaches, helping security teams stay proactive in safeguarding the network.
Audit Logs:
Description: Logs that record administrative actions and changes made to the firewall configuration. These logs are critical for tracking changes made by administrators for security and compliance purposes.
Significance: Supports auditing and compliance efforts by providing a detailed history of administrative activities, aiding in investigations and ensuring adherence to security policies.
Event Logs:
Description: General event logs that capture a wide range of events occurring on the firewall, helping administrators monitor and troubleshoot issues.
Significance: Provides a holistic view of firewall activities, facilitating the identification and resolution of any anomalies or performance issues within the network infrastructure.
Session Logs:
Description: Logs that provide information about established network sessions, including session start and end times, data transfer rates, and associated user or device information.
Significance: Useful for monitoring network sessions in real-time, identifying abnormal activities, and optimizing network performance.
DDoS Mitigation Logs:
Description: Logs that record events related to Distributed Denial of Service (DDoS) attacks, including mitigation actions taken by the firewall to protect the network.
Significance: Critical for identifying and mitigating DDoS attacks promptly, safeguarding network resources and ensuring uninterrupted service availability.
Geo-location Logs:
Description: Logs that capture information about the geographic locations of network connections. This can be useful for monitoring and controlling access based on geographical regions.
Significance: Aids in enhancing security by detecting and preventing suspicious activities originating from specific geographic locations, contributing to a more robust defense against potential threats.
URL Filtering Logs:
Description: Records data related to web traffic and URL filtering. This includes details about blocked and allowed URLs, as well as categories of websites accessed by users.
Significance: Enables organizations to manage internet access, enforce acceptable use policies, and enhance overall network security by monitoring and controlling web activity.
User Activity Logs:
Description: Logs that capture user-specific information, such as authentication events, user login/logout details, and user-specific traffic patterns.
Significance: Aids in tracking user behavior, ensuring accountability, and providing insights into potential security incidents involving specific users.
VPN Logs:
Description: Information related to Virtual Private Network (VPN) connections, including events like connection and disconnection, tunnel information, and VPN-specific errors.
Significance: Crucial for monitoring the integrity and performance of VPN connections, ensuring secure communication between remote users and the corporate network.
System Logs:
Description: Logs that provide information about the overall health, status, and configuration changes of the firewall system. This may include logs related to high availability (HA), software updates, and other system-level events.
Significance: Essential for maintaining the firewall infrastructure, diagnosing issues, and ensuring the system operates optimally.
Compliance Logs:
Description: Logs specifically focused on recording events relevant to regulatory compliance requirements. This may include activities ensuring compliance with industry standards or legal mandates.
Significance: Essential for organizations subject to specific regulations, helping to demonstrate adherence to compliance standards and facilitating audit processes.
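Log formats vary widely by vendor; assuming a simple key=value traffic-log line, the fields described above (source, destination, port, protocol, action) might be extracted like this:

```python
# Hypothetical key=value traffic-log parser. Real firewall logs use
# vendor-specific formats (CEF, syslog variants, CSV, etc.); this sketch
# only illustrates pulling out the common fields named in the text.

def parse_traffic_log(line: str) -> dict:
    # Each whitespace-separated token is split once on "=".
    return dict(field.split("=", 1) for field in line.split())

line = "src=10.0.0.5 dst=203.0.113.7 dport=443 proto=tcp action=allow"
record = parse_traffic_log(line)
print(record["action"])  # allow
```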
== Configuration ==
Setting up a firewall is a complex and error-prone task. A network may face security issues due to configuration errors.
Firewall policy configuration is based on specific network type (e.g., public or private), and can be set up using firewall rules that either block or allow access to prevent potential attacks from hackers or malware.
== See also ==
== References ==
== External links ==
Evolution of the Firewall Industry – discusses different architectures, how packets are processed and provides a timeline of the evolution.
A History and Survey of Network Firewalls – provides an overview of firewalls at various ISO levels, with references to original papers where early firewall work was reported.
A network enclave is a section of an internal network that is subdivided from the rest of the network.
== Purpose ==
The purpose of a network enclave is to limit internal access to a portion of a network. It is necessary when the set of resources differs from those of the general network surroundings. Typically, network enclaves are not publicly accessible. Internal accessibility is restricted through the use of internal firewalls, VLANs, network access control and VPNs.
== Scenarios ==
Network enclaves consist of standalone assets that do not interact with other information systems or networks. A major difference between a DMZ (demilitarized zone) and a network enclave is that a DMZ allows inbound and outbound traffic access, with firewall boundaries traversed, whereas in an enclave firewall boundaries are not traversed. Enclave protection tools can be used to provide protection within specific security domains. These mechanisms are installed as part of an intranet to connect networks that have similar security requirements.
== DMZ within an enclave ==
A DMZ can be established within an enclave to host publicly accessible systems. The ideal design is to build the DMZ on a separate network interface of the enclave perimeter firewall. All DMZ traffic would be routed through the firewall for processing and the DMZ would still be kept separate from the rest of the protected network.
== References ==
Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software. The technology differs from perimeter-based protections such as firewalls, that can only detect and block attacks by using network information without contextual awareness. RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering. RASP-protected applications rely less on external devices like firewalls to provide runtime security protection. When a threat is detected RASP can prevent exploitation and possibly take other actions, including terminating a user's session, shutting the application down, alerting security personnel and sending a warning to the user. RASP aims to close the gap left by application security testing and network perimeter controls, neither of which have enough insight into real-time data and event flows to either prevent vulnerabilities slipping through the review process or block new threats that were unforeseen during development.
== Implementation ==
RASP can be integrated as a framework or module that runs in conjunction with a program's code, libraries and system calls. The technology can also be implemented as a virtualization layer. RASP is similar to interactive application security testing (IAST); the key difference is that IAST focuses on identifying vulnerabilities within applications, while RASP focuses on protecting against cybersecurity attacks that may take advantage of those vulnerabilities or other attack vectors.
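The idea of instrumentation running alongside a program's code can be sketched with a simple wrapper: inputs are inspected before the original function executes, and suspicious ones are blocked. The blocked pattern and the protected function are assumptions for this sketch; real RASP products hook far deeper (bytecode, system calls, library internals) and use much richer detection logic:

```python
# Illustrative sketch of runtime instrumentation: wrap a function so its
# inputs are inspected before the original code runs.

import functools

def rasp_guard(func):
    @functools.wraps(func)
    def wrapper(query: str):
        # Crude substring check standing in for real attack detection.
        if "' OR '1'='1" in query:
            raise PermissionError("RASP: blocked suspicious input")
        return func(query)
    return wrapper

@rasp_guard
def run_query(query: str) -> str:
    return f"executing: {query}"

print(run_query("SELECT * FROM users WHERE id = 42"))
# run_query("x' OR '1'='1") would raise PermissionError instead of executing
```

Because the guard sits inside the process, it sees the exact string the application is about to use, which is the contextual awareness the text contrasts with perimeter devices.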
== Deployment options ==
RASP solutions can be deployed in two different ways: monitor or protection mode. In monitor mode, the RASP solution reports on web application attacks but does not block any attack. In protection mode, the RASP solution reports and blocks web application attacks.
== Future Research ==
Pursue "integrated" approaches that support both development-time and runtime
Explore decentralized coordination, planning, and optimization approaches
Explore quantitative and qualitative approaches to assess overall security posture
== See also ==
Runtime verification
Runtime error detection
Dynamic program analysis
== References ==
The internationalized domain name (IDN) homograph attack (sometimes written as homoglyph attack) is a method used by malicious parties to deceive computer users about what remote system they are communicating with, by exploiting the fact that many different characters look alike (i.e., they rely on homoglyphs to deceive visitors). For example, the Cyrillic, Greek and Latin alphabets each have a letter ⟨o⟩ that has the same shape but represents different sounds or phonemes in their respective writing systems.
This kind of spoofing attack is also known as script spoofing. Unicode incorporates numerous scripts (writing systems), and, for a number of reasons, similar-looking characters such as Greek Ο, Latin O, and Cyrillic О were not assigned the same code. Their incorrect or malicious usage is a possibility for security attacks. Thus, for example, a regular user of exаmple.com may be lured to click on it unquestioningly as an apparently familiar link, unaware that the third letter is not the Latin character "a" but rather the Cyrillic character "а" and is thus an entirely different domain from the intended one.
The registration of homographic domain names is akin to typosquatting, in that both forms of attacks use a similar-looking name to a more established domain to fool a user. The major difference is that in typosquatting the perpetrator attracts victims by relying on natural typographical errors commonly made when manually entering a URL, while in homograph spoofing the perpetrator deceives the victims by presenting visually indistinguishable hyperlinks. Indeed, it would be a rare accident for a web user to type, for example, a Cyrillic letter within an otherwise English word, turning "bank" into "bаnk". There are cases in which a registration can be both typosquatting and homograph spoofing; the pairs of l/I, i/j, and 0/O are all both close together on keyboards and, depending on the typeface, may be difficult or impossible to distinguish visually.
== History ==
An early nuisance of this kind, pre-dating the Internet and even text terminals, was the confusion between "l" (lowercase letter "L") / "1" (the number "one") and "O" (capital letter for vowel "o") / "0" (the number "zero"). Some typewriters in the pre-computer era even combined the L and the one; users had to type a lowercase L when the number one was needed. The zero/o confusion gave rise to the tradition of crossing zeros, so that a computer operator would type them correctly. Unicode may contribute to this greatly with its combining characters, accents, several types of hyphen, etc., often due to inadequate rendering support, especially with smaller font sizes and the wide variety of fonts.
Even earlier, handwriting provided rich opportunities for confusion. A notable example is the etymology of the word "zenith". The translation from the Arabic "samt" included the scribe's confusing of "m" into "ni". This was common in medieval blackletter, which did not connect the vertical columns on the letters i, m, n, or u, making them difficult to distinguish when several were in a row. The latter, as well as "rn"/"m"/"rri" ("RN"/"M"/"RRI") confusion, is still possible for a human eye even with modern advanced computer technology.
Intentional look-alike character substitution with different alphabets has also been known in various contexts. For example, Faux Cyrillic has been used as an amusement or attention-grabber and "Volapuk encoding", in which Cyrillic script is represented by similar Latin characters, was used in early days of the Internet as a way to overcome the lack of support for the Cyrillic alphabet. Another example is that vehicle registration plates can have both Cyrillic (for domestic usage in Cyrillic script countries) and Latin (for international driving) with the same letters. Registration plates that are issued in Greece are limited to using letters of the Greek alphabet that have homoglyphs in the Latin alphabet, as European Union regulations require the use of Latin letters.
== Homographs in ASCII ==
ASCII has several characters or pairs of characters that look alike and are known as homographs (or homoglyphs). Spoofing attacks based on these similarities are known as homograph spoofing attacks. Examples include 0 (the digit zero) and O (the capital letter), and lowercase "l" (L) and uppercase "I" (i).
In a typical example of a hypothetical attack, someone could register a domain name that appears almost identical to an existing domain but goes somewhere else. For example, the domain "rnicrosoft.com" begins with "r" and "n", not "m".
Other examples are G00GLE.COM which looks much like GOOGLE.COM in some fonts.
Using a mix of uppercase and lowercase characters, googIe.com (capital i, not small L) looks much like google.com in some fonts. PayPal was a target of a phishing scam exploiting this, using the domain PayPaI.com. In certain narrow-spaced fonts such as Tahoma (the default in the address bar in Windows XP), placing a c in front of a j, l or i will produce homoglyphs such as cl cj ci (d g a).
== Homographs in internationalized domain names ==
In multilingual computer systems, different logical characters may have identical appearances.
For example, Unicode character U+0430, Cyrillic small letter a ("а"), can look identical to Unicode character U+0061, Latin small letter a, ("a") which is the lowercase "a" used in English. Hence wikipediа.org (xn--wikipedi-86g.org; the Cyrillic version) instead of wikipedia.org (the Latin version).
The problem arises from the different treatment of the characters in the user's mind and the computer's programming. From the viewpoint of the user, a Cyrillic "а" within a Latin string is a Latin "a"; there is no difference in the glyphs for these characters in most fonts. However, the computer treats them differently when processing the character string as an identifier. Thus, the user's assumption of a one-to-one correspondence between the visual appearance of a name and the named entity breaks down.
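The mismatch between what the user sees and what the computer processes can be detected heuristically by checking whether a domain label mixes characters from more than one script. The sketch below keys off Unicode character names via the standard library; this is a simplification, not the full Unicode confusables mechanism that browsers and registries use:

```python
# Heuristic mixed-script check: collect the script prefix of each letter's
# Unicode name (e.g. "LATIN", "CYRILLIC", "GREEK"). A label drawing letters
# from multiple scripts is a homograph-attack red flag.

import unicodedata

def scripts(label: str) -> set:
    found = set()
    for ch in label:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            found.add(name.split()[0])
    return found

print(scripts("wikipedia"))        # {'LATIN'}
print(scripts("wikipedi\u0430"))   # {'CYRILLIC', 'LATIN'} - last letter is Cyrillic
```

Mixed scripts alone are not proof of spoofing (legitimate IDNs exist), which is why real policies combine script checks with whole-script confusable tables and per-registry character restrictions.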
Internationalized domain names provide a backward-compatible way for domain names to use the full Unicode character set, and this standard is already widely supported. However this system expanded the character repertoire from a few dozen characters in a single alphabet to many thousands of characters in many scripts; this greatly increased the scope for homograph attacks.
This opens a rich vein of opportunities for phishing and other varieties of fraud. An attacker could register a domain name that looks just like that of a legitimate website, but in which some of the letters have been replaced by homographs in another alphabet. The attacker could then send e-mail messages purporting to come from the original site, but directing people to the bogus site. The spoof site could then record information such as passwords or account details, while passing traffic through to the real site. The victims may never notice the difference, until suspicious or criminal activity occurs with their accounts.
In December 2001 Evgeniy Gabrilovich and Alex Gontmakher, both from Technion, Israel, published a paper titled "The Homograph Attack", which described an attack that used Unicode URLs to spoof a website URL. To prove the feasibility of this kind of attack, the researchers successfully registered a variant of the domain name microsoft.com which incorporated Cyrillic characters.
Problems of this kind were anticipated before IDN was introduced, and guidelines were issued to registries to try to avoid or reduce the problem. For example, it was advised that registries only accept characters from the Latin alphabet and that of their own country, not all of Unicode characters, but this advice was neglected by major TLDs.
On February 6, 2005, Cory Doctorow reported that this exploit was disclosed by 3ric Johanson at the hacker conference Shmoocon. Web browsers supporting IDNA appeared to direct the URL http://www.pаypal.com/, in which the first a character is replaced by a Cyrillic а, to the site of the well known payment site PayPal, but actually led to a spoofed web site with different content. Popular browsers continued to have problems properly displaying international domain names through April 2017.
The following alphabets have characters that can be used for spoofing attacks (please note, these are only the most obvious and common, given artistic license and how much risk the spoofer will take of getting caught; the possibilities are far more numerous than can be listed here):
=== Cyrillic ===
Cyrillic is, by far, the most commonly used alphabet for homoglyphs, largely because it contains 11 lowercase glyphs that are identical or nearly identical to Latin counterparts.
The Cyrillic letters а, с, е, о, р, х and у have optical counterparts in the basic Latin alphabet and look close or identical to a, c, e, o, p, x and y. Cyrillic З, Ч and б resemble the numerals 3, 4 and 6. Italic type generates more homoglyphs: the italic forms of дтпи resemble dmnu (in some fonts д can be used, since its italic form resembles a lowercase g; however, in most mainstream fonts, italic д instead resembles a partial differential sign, ∂).
If capital letters are counted, АВСЕНІЈКМОРЅТХ can substitute ABCEHIJKMOPSTX, in addition to the capitals for the lowercase Cyrillic homoglyphs.
Cyrillic non-Russian problematic letters are і and i, ј and j, ԛ and q, ѕ and s, ԝ and w, Ү and Y, while Ғ and F, Ԍ and G bear some resemblance to each other. Cyrillic ӓёїӧ can also be used if an IDN itself is being spoofed, to fake äëïö.
While Komi De (ԁ), shha (һ), palochka (Ӏ) and izhitsa (ѵ) bear strong resemblance to Latin d, h, l and v, these letters are either rare or archaic and are not widely supported in most standard fonts (they are not included in the WGL-4). Attempting to use them could cause a ransom note effect.
=== Greek ===
From the Greek alphabet, only omicron (ο) and sometimes nu (ν) appear identical to a Latin alphabet letter in the lowercase used for URLs. Fonts that are in italic type will feature Greek alpha (α) looking like a Latin a.
This list increases if close matches are also allowed (such as Greek εικηρτυωχγ for eiknptuwxy). Using capital letters, the list expands greatly. Greek ΑΒΕΗΙΚΜΝΟΡΤΧΥΖ looks identical to Latin ABEHIKMNOPTXYZ. Greek ΑΓΒΕΗΚΜΟΠΡΤΦΧ looks similar to Cyrillic АГВЕНКМОПРТФХ (as do Cyrillic Лл (Лл) and Greek Λ in certain geometric sans-serif fonts), Greek letters κ and ο look similar to Cyrillic к and о. Besides this Greek τ, φ can be similar to Cyrillic т, ф in some fonts, Greek δ looks like Cyrillic б, and the Cyrillic а also italicizes the same as its Latin counterpart, making it possible to substitute it for alpha or vice versa. The lunate form of sigma, Ϲϲ, resembles both Latin Cc and Cyrillic Сс. Especially in contemporary typefaces, Cyrillic л is rendered with a glyph indistinguishable from Greek π.
If an IDN itself is being spoofed, Greek beta β can be a substitute for German eszett ß in some fonts (and in fact, code page 437 treats them as equivalent), as can Greek end-of-word-variant sigma ς for ç; accented Greek substitutes όίά can usually be used for óíá in many fonts, with the last of these (alpha) again only resembling a in italic type.
=== Armenian ===
The Armenian alphabet can also contribute critical characters: several Armenian letters, such as օ, ո, ս, as well as capital Տ and Լ, are often completely identical to Latin characters in modern fonts, and others are similar enough to pass off, such as ցհոօզս, which look like ghnoqu; յ, which resembles j (albeit dotless); and ք, which can resemble either p or f depending on the font; ա can resemble Cyrillic ш. However, the use of Armenian is somewhat less reliable: not all standard fonts feature Armenian glyphs (whereas Greek and Cyrillic are widely supported), and Windows prior to Windows 7 rendered Armenian in a distinct font, Sylfaen, so mixing Armenian with Latin would appear obviously different unless the text used Sylfaen or a Unicode typeface. (This is known as a ransom note effect.) The current version of Tahoma, used in Windows 7, supports Armenian (previous versions did not). Furthermore, this font differentiates Latin g from Armenian ց.
Two Armenian letters (Ձ and շ) can also resemble the number 2, Յ resembles 3, and another (վ) sometimes resembles the number 4.
=== Hebrew ===
Hebrew spoofing is generally rare. Only three letters from that alphabet can reliably be used: samekh (ס), which sometimes resembles o, vav with diacritic (וֹ), which resembles an i, and heth (ח), which resembles the letter n. Less accurate approximants for some other alphanumerics can also be found, but these are usually only accurate enough to use for the purposes of foreign branding and not for substitution. Furthermore, the Hebrew alphabet is written from right to left and trying to mix it with left-to-right glyphs may cause problems.
=== Thai ===
Though the Thai script has historically had a distinct look with numerous loops and small flourishes, modern Thai typography, beginning with Manoptica in 1973 and continuing through IBM Plex in the modern era, has increasingly adopted a simplified style in which Thai characters are represented with glyphs strongly resembling Latin letters. ค (A), ท (n), น (u), บ (U), ป (J), พ (W), ร (S), and ล (a) are among the Thai glyphs that can closely resemble Latin.
=== Chinese ===
The Chinese language can be problematic for homographs as many characters exist as both traditional (regular script) and simplified Chinese characters. In the .org domain, registering one variant renders the other unavailable to anyone; in .biz a single Chinese-language IDN registration delivers both variants as active domains (which must have the same domain name server and the same registrant). .hk (.香港) also adopts this policy.
=== Other scripts ===
Other Unicode scripts in which homographs can be found include Number Forms (Roman numerals), CJK Compatibility and Enclosed CJK Letters and Months (certain abbreviations), Latin (certain digraphs), Currency Symbols, Mathematical Alphanumeric Symbols, and Alphabetic Presentation Forms (typographic ligatures).
=== Accented characters ===
Two names which differ only in an accent on one character may look very similar, particularly when the substitution involves the dotted letter i; the tittle (dot) on the i can be replaced with a diacritic (such as a grave accent or acute accent; both ì and í are included in most standard character sets and fonts) that can only be detected with close inspection. In most top-level domain registries, wíkipedia.tld (xn--wkipedia-c2a.tld) and wikipedia.tld are two different names which may be held by different registrants. One exception is .ca, where reserving the plain-ASCII version of the domain prevents another registrant from claiming an accented version of the same name.
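This mapping can be reproduced with Python's built-in IDNA codec; the label below is the example from this section, and its Punycode form matches the one quoted above (a minimal sketch, not a full registry-grade conversion):

```python
# The accented label from the example above, converted with Python's
# built-in IDNA (2003) codec: ToASCII applies nameprep, then Punycode.
label = "wíkipedia"
ascii_form = label.encode("idna")
print(ascii_form)                 # b'xn--wkipedia-c2a'
print(ascii_form.decode("idna"))  # wíkipedia
```

The round trip shows why the two names are distinct registrations: the registry stores and compares the ASCII (`xn--`) form, not the display form.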
=== Non-displayable characters ===
Unicode includes many characters which are not displayed by default, such as the zero-width space. In general, ICANN prohibits any domain with these characters from being registered, regardless of TLD.
=== Known homograph attacks ===
In 2011, an unknown source (registering under the name "Completely Anonymous") registered a domain name homographic to television station KBOI-TV's to create a fake news website. The sole purpose of the site was to spread an April Fool's Day joke regarding the Governor of Idaho issuing a supposed ban on the sale of music by Justin Bieber.
In September 2017, security researcher Ankit Anubhav discovered an IDN homograph attack where the attackers registered adoḅe.com to deliver the Betabot trojan.
== Defending against the attack ==
=== Client-side mitigation ===
The simplest defense is for web browsers not to support IDNA or other similar mechanisms, or for users to turn off whatever support their browsers have. That could mean blocking access to IDNA sites, but generally browsers permit access and just display IDNs in Punycode. Either way, this amounts to abandoning non-ASCII domain names.
Mozilla Firefox versions 22 and later display IDNs if either the TLD prevents homograph attacks by restricting which characters can be used in domain names or labels do not mix scripts for different languages. Otherwise, IDNs are displayed in Punycode.
Google Chrome versions 51 and later use an algorithm similar to the one used by Firefox. Previous versions display an IDN only if all of its characters belong to one (and only one) of the user's preferred languages. Chromium and Chromium-based browsers such as Microsoft Edge (since 2020) and Opera also use the same algorithm.
Safari's approach is to render problematic character sets as Punycode. This can be changed by altering the settings in Mac OS X's system files.
Internet Explorer versions 7 and later allow IDNs except for labels that mix scripts for different languages; labels that mix scripts are displayed in Punycode. There are exceptions for locales where ASCII characters are commonly mixed with localized scripts. Internet Explorer 7 was capable of using IDNs, but imposed restrictions on displaying non-ASCII domain names based on a user-defined list of allowed languages, and provided an anti-phishing filter that checked suspicious websites against a remote database of known phishing sites.
Microsoft Edge Legacy converts all Unicode into Punycode.
As an additional defense, Internet Explorer 7, Firefox 2.0 and above, and Opera 9.10 include phishing filters that attempt to alert users when they visit malicious websites. As of April 2017, several browsers (including Chrome, Firefox, and Opera) were displaying IDNs consisting purely of Cyrillic characters normally (not as Punycode), allowing spoofing attacks. Chrome tightened IDN restrictions in version 59 to prevent this attack.
These methods of defense only extend to within a browser. Homographic URLs that house malicious software can still be distributed, without being displayed as Punycode, through e-mail, social networking or other websites without being detected until the user actually clicks the link. While the fake link will show in Punycode when it is clicked, by this point the page has already begun loading into the browser.
=== Server-side/registry operator mitigation ===
The IDN homographs database is a Python library that allows developers to defend against homograph attacks using machine learning-based character recognition.
ICANN has implemented a policy prohibiting any potential internationalized TLD from choosing letters that could resemble an existing Latin TLD and thus be used for homograph attacks. Proposed IDN TLDs .бг (Bulgaria), .укр (Ukraine) and .ελ (Greece) were initially rejected or stalled because of their perceived resemblance to Latin letters; all three (and Serbian .срб and Mongolian .мон) were later accepted. Three-letter TLDs are considered safer than two-letter TLDs, since they are harder to match to normal Latin ISO 3166 country domains; although the potential to match new generic domains remains, such generic domains are far more expensive than registering a second- or third-level domain address, making it cost-prohibitive to register a homoglyphic TLD for the sole purpose of making fraudulent domains (which itself would draw ICANN scrutiny).
The Russian registry operator Coordination Center for TLD RU only accepts Cyrillic names for the top-level domain .рф, forbidding a mix with Latin or Greek characters. However, the problem in .com and other gTLDs remains open.
=== Research based mitigations ===
In their 2019 study, Suzuki et al. introduced ShamFinder, a program for recognizing homograph IDNs, shedding light on their prevalence in real-world scenarios. Similarly, Chiba et al. (2019) designed DomainScouter, a system adept at detecting diverse homograph IDNs; by analyzing an estimated 4.4 million registered IDNs across 570 top-level domains (TLDs), it successfully identified 8,284 IDN homographs, including many previously unidentified cases targeting brands in languages other than English.
== See also ==
Security issues in Unicode
Internationalized domain name
Homoglyph
Faux Cyrillic
Metal umlaut
Duplicate characters in Unicode
Unicode equivalence
Typosquatting
Leet
Gyaru-moji
Yaminjeongeum
Martian language
== Notes ==
== References == | Wikipedia/IDN_homograph_attack |
The Gordon–Loeb model is an economic model that analyzes the optimal level of investment in information security.
The benefits of investing in cybersecurity stem from reducing the costs associated with cyber breaches. The Gordon-Loeb model provides a framework for determining how much to invest in cybersecurity, using a cost-benefit approach.
The model includes the following key components:
Organizational data vulnerable to cyber-attacks, with vulnerability denoted by v (0 ≤ v ≤ 1), representing the probability of a breach occurring under current conditions.
The potential loss from a breach, represented by L, which can be expressed in monetary terms. The expected loss is calculated as vL before additional cybersecurity investments.
Investment in cybersecurity, denoted as z, reduces v based on the effectiveness of the security measures, known as the security breach probability function.
Gordon and Loeb demonstrated that the optimal level of security investment, z*, does not exceed 37% of the expected loss from a breach. Specifically, z*(v) ≤ (1/e)vL.
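The bound above can be illustrated numerically using the paper's class-I security breach probability function S(z, v) = v/(αz + 1)^β; the parameter values α, β, v and L below are illustrative assumptions, not values prescribed by the model:

```python
import math

def breach_prob(z, v, alpha=1e-5, beta=1.0):
    # Class-I breach probability function: S(z, v) = v / (alpha*z + 1)^beta.
    return v / (alpha * z + 1) ** beta

def optimal_investment(v, L, alpha=1e-5, beta=1.0):
    # Maximizes ENBIS(z) = [v - S(z, v)] * L - z; this closed form follows
    # from the first-order condition for class-I functions.
    z_star = ((v * alpha * beta * L) ** (1 / (beta + 1)) - 1) / alpha
    return max(z_star, 0.0)

v, L = 0.6, 1_000_000          # illustrative vulnerability and potential loss
z = optimal_investment(v, L)
print(round(z))                 # 144949
print(z <= v * L / math.e)      # True: z* never exceeds (1/e)·v·L
```

With these assumed parameters the optimal spend (about $145k) is well below both the expected loss vL ($600k) and the (1/e)vL bound (about $221k).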
== Overview ==
The model was first introduced by Lawrence A. Gordon and Martin P. Loeb in a 2002 paper published in ACM Transactions on Information and System Security, titled "The Economics of Information Security Investment". It was reprinted in the 2004 book Economics of Information Security. Both authors are professors at the University of Maryland's Robert H. Smith School of Business.
The model is widely regarded as one of the leading analytical tools in cybersecurity economics. It has been extensively referenced in academic and industry literature. It has also been tested in various contexts by researchers such as Marc Lelarge and Yuliy Baryshnikov.
The model has also been covered by mainstream media, including The Wall Street Journal and The Financial Times.
Subsequent research has critiqued the model's assumptions, suggesting that some security breach functions may warrant investing no less than 1/2 of the expected loss, challenging the universality of the 1/e factor. Alternative formulations even propose that some loss functions may justify investment at the full expected loss.
== See also ==
Genuine progress indicator
== References == | Wikipedia/Gordon–Loeb_model |
Security controls or security measures are safeguards or countermeasures to avoid, detect, counteract, or minimize security risks to physical property, information, computer systems, or other assets. In the field of information security, such controls protect the confidentiality, integrity and availability of information.
Systems of controls can be referred to as frameworks or standards. Frameworks can enable an organization to manage security controls across different types of assets with consistency.
== Types of security controls ==
Security controls can be classified by various criteria. For example, controls can be classified by how/when/where they act relative to a security breach (sometimes termed control types):
Preventive controls are intended to prevent an incident from occurring e.g. by locking out unauthorized intruders;
Detective controls are intended to identify, characterize, and log an incident e.g. isolating suspicious behavior from a malicious actor on a network;
Compensating controls mitigate ongoing damages of an active incident, e.g. shutting down a system upon detecting malware.
After the event, corrective controls are intended to restore damage caused by the incident e.g. by recovering the organization to normal working status as efficiently as possible.
Security controls can also be classified according to the implementation of the control (sometimes termed control categories), for example:
Physical controls - e.g. fences, doors, locks and fire extinguishers;
Procedural or administrative controls - e.g. incident response processes, management oversight, security awareness and training;
Technical or logical controls - e.g. user authentication (login) and logical access controls, antivirus software, firewalls;
Legal and regulatory or compliance controls - e.g. privacy laws, policies and clauses.
== Information security standards and control frameworks ==
Numerous information security standards promote good security practices and define frameworks or systems to structure the analysis and design for managing information security controls. Some of the most well known standards are outlined below.
=== International Standards Organization ===
ISO/IEC 27001:2022 was released in October 2022. All organizations certified to ISO 27001:2013 are obliged to transition to the new version of the Standard within 3 years (by October 2025).
The 2022 version of the Standard specifies 93 controls in 4 groups:
A.5: Organisational controls
A.6: People controls
A.7: Physical controls
A.8: Technological controls
It groups these controls into operational capabilities as follows:
Governance
Asset management
Information protection
Human resource security
Physical security
System and network security
Application security
Secure configuration
Identity and access management
Threat and vulnerability management
Continuity
Supplier relationships security
Legal and compliance
Information security event management; and
Information security assurance
The previous version of the Standard, ISO/IEC 27001:2013, specified 114 controls in 14 groups:
A.5: Information security policies
A.6: How information security is organised
A.7: Human resources security - controls that are applied before, during, or after employment.
A.8: Asset management
A.9: Access controls and managing user access
A.10: Cryptographic technology
A.11: Physical security of the organisation's sites and equipment
A.12: Operational security
A.13: Secure communications and data transfer
A.14: Secure acquisition, development, and support of information systems
A.15: Security for suppliers and third parties
A.16: Incident management
A.17: Business continuity/disaster recovery (to the extent that it affects information security)
A.18: Compliance - with internal requirements, such as policies, and with external requirements, such as laws.
=== U.S. Federal Government information security standards ===
The Federal Information Processing Standards (FIPS) apply to all US government agencies. However, certain national security systems, under the purview of the Committee on National Security Systems, are managed outside these standards.
Federal information Processing Standard 200 (FIPS 200), "Minimum Security Requirements for Federal Information and Information Systems," specifies the minimum security controls for federal information systems and the processes by which risk-based selection of security controls occurs. The catalog of minimum security controls is found in NIST Special Publication SP 800-53.
FIPS 200 identifies 17 broad control families:
AC Access Control
AT Awareness and Training
AU Audit and Accountability
CA Security Assessment and Authorization (historical abbreviation)
CM Configuration Management
CP Contingency Planning
IA Identification and Authentication
IR Incident Response
MA Maintenance
MP Media Protection
PE Physical and Environmental Protection
PL Planning
PS Personnel Security
RA Risk Assessment
SA System and Services Acquisition
SC System and Communications Protection
SI System and Information Integrity
=== National Institute of Standards and Technology ===
==== NIST Cybersecurity Framework ====
A maturity based framework divided into five functional areas and approximately 100 individual controls in its "core."
==== NIST SP-800-53 ====
A database of nearly one thousand technical controls grouped into families and cross references.
Starting with Revision 3 of 800-53, Program Management controls were identified. These controls are independent of the system controls, but are necessary for an effective security program.
Starting with Revision 4 of 800-53, eight families of privacy controls were identified to align the security controls with the privacy expectations of federal law.
Starting with Revision 5 of 800-53, the controls also address data privacy as defined by the NIST Data Privacy Framework.
=== Commercial Control Sets ===
==== COBIT5 ====
A proprietary control set published by ISACA.
Governance of Enterprise IT
Evaluate, Direct and Monitor (EDM) – 5 processes
Management of Enterprise IT
Align, Plan and Organise (APO) – 13 processes
Build, Acquire and Implement (BAI) – 10 processes
Deliver, Service and Support (DSS) – 6 processes
Monitor, Evaluate and Assess (MEA) - 3 processes
==== CIS Controls (CIS 18) ====
Formerly known as the SANS Critical Security Controls, now officially called the CIS Critical Security Controls (CIS Controls). The CIS Controls are divided into 18 controls.
CIS Control 1: Inventory and Control of Enterprise Assets
CIS Control 2: Inventory and Control of Software Assets
CIS Control 3: Data Protection
CIS Control 4: Secure Configuration of Enterprise Assets and Software
CIS Control 5: Account Management
CIS Control 6: Access Control Management
CIS Control 7: Continuous Vulnerability Management
CIS Control 8: Audit Log Management
CIS Control 9: Email and Web Browser Protections
CIS Control 10: Malware Defenses
CIS Control 11: Data Recovery
CIS Control 12: Network Infrastructure Management
CIS Control 13: Network Monitoring and Defense
CIS Control 14: Security Awareness and Skills Training
CIS Control 15: Service Provider Management
CIS Control 16: Application Software Security
CIS Control 17: Incident Response Management
CIS Control 18: Penetration Testing
The Controls are divided further into Implementation Groups (IGs) which are a recommended guidance to prioritize implementation of the CIS controls.
== Telecommunications ==
In telecommunications, security controls are defined as security services as part of the OSI model:
ITU-T X.800 Recommendation.
ISO 7498-2
These are technically aligned. This model is widely recognized.
== Data liability (legal, regulatory, compliance) ==
The intersection of security risk and laws that set standards of care is where data liability is defined. A handful of databases are emerging to help risk managers research laws that define liability at the country, province/state, and local levels. In these control sets, compliance with the relevant laws is the actual risk mitigator.
Perkins Coie Security Breach Notification Chart: A set of articles (one per state) that define data breach notification requirements among US states.
NCSL Security Breach Notification Laws: A list of US state statutes that define data breach notification requirements.
ts jurisdiction: A commercial cybersecurity research platform with coverage of 380+ US State & Federal laws that impact cybersecurity before and after a breach. ts jurisdiction also maps to the NIST Cybersecurity Framework.
== Business control frameworks ==
There are a wide range of frameworks and standards looking at internal business, and inter-business controls, including:
SSAE 16
ISAE 3402
Payment Card Industry Data Security Standard
Health Insurance Portability and Accountability Act
COBIT 4/5
CIS Top-20
NIST Cybersecurity Framework
== See also ==
Access control
Aviation security
Countermeasure
Defense in depth
Environmental design
Information security
Physical Security
Risk
Security
Security engineering
Security management
Security services
Gordon–Loeb model for cyber security investments
== References ==
== External links ==
NIST SP 800-53 Revision 4
DoD Instruction 8500.2
FISMApedia Terms | Wikipedia/Security_controls |
In computer security, general access control includes identification, authorization, authentication, access approval, and audit. A more narrow definition of access control would cover only access approval, whereby the system makes a decision to grant or reject an access request from an already authenticated subject, based on what the subject is authorized to access. Authentication and access control are often combined into a single operation, so that access is approved based on successful authentication, or based on an anonymous access token. Authentication methods and tokens include passwords, biometric scans, physical keys, electronic keys and devices, hidden paths, social barriers, and monitoring by humans and automated systems.
== Software entities ==
In any access-control model, the entities that can perform actions on the system are called subjects, and the entities representing resources to which access may need to be controlled are called objects (see also Access Control Matrix). Subjects and objects should both be considered as software entities, rather than as human users: any human users can only have an effect on the system via the software entities that they control.
Although some systems equate subjects with user IDs, so that all processes started by a user by default have the same authority, this level of control is not fine-grained enough to satisfy the principle of least privilege, and arguably is responsible for the prevalence of malware in such systems (see computer insecurity).
In some models, for example the object-capability model, any software entity can potentially act as both subject and object.
As of 2014, access-control models tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs).
In a capability-based model, holding an unforgeable reference or capability to an object provides access to the object (roughly analogous to how possession of one's house key grants one access to one's house); access is conveyed to another party by transmitting such a capability over a secure channel.
In an ACL-based model, a subject's access to an object or group of objects depends on whether its identity appears on a list associated with the object (roughly analogous to how a bouncer at a private party would check an ID to see if a name appears on the guest list); access is conveyed by editing the list. (Different ACL systems have a variety of different conventions regarding who or what is responsible for editing the list and how it is edited.)
Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members of a group of subjects (often the group is itself modeled as a subject).
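The ACL-based model described above can be sketched in a few lines: the list lives with the object and is consulted on each request (all subject, object and right names here are illustrative):

```python
# Each object carries its own list of (subject -> rights); illustrative data.
acl = {"report.txt": {"alice": {"read", "write"}, "bob": {"read"}}}

def allowed(subject, obj, right):
    # Access is granted only if the subject's entry on the object's list
    # contains the requested right; editing the list conveys or revokes access.
    return right in acl.get(obj, {}).get(subject, set())

print(allowed("alice", "report.txt", "write"))  # True
print(allowed("bob", "report.txt", "write"))    # False
```

A capability-based system inverts this: instead of the object holding a list of subjects, the subject holds unforgeable tokens granting rights.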
== Services ==
Access control systems provide the essential services of authorization, identification and authentication (I&A), access approval, and accountability where:
authorization specifies what a subject can do
identification and authentication ensure that only legitimate subjects can log on to a system
access approval grants access during operations, by association of users with the resources that they are allowed to access, based on the authorization policy
accountability identifies what a subject (or all subjects associated with a user) did
== Authorization ==
Authorization involves the act of defining access-rights for subjects. An authorization policy specifies the operations that subjects are allowed to execute within a system.
Most modern operating systems implement authorization policies as formal sets of permissions that are variations or extensions of three basic types of access:
Read (R): The subject can:
Read file contents
List directory contents
Write (W): The subject can change the contents of a file or directory with the following tasks:
Add
Update
Delete
Rename
Execute (X): If the file is a program, the subject can cause the program to be run. (In Unix-style systems, the "execute" permission doubles as a "traverse directory" permission when granted for a directory.)
These rights and permissions are implemented differently in systems based on discretionary access control (DAC) and mandatory access control (MAC).
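The read/write/execute triad above is visible directly in Unix permission bits; a small sketch using Python's standard `stat` module:

```python
import stat

# Regular file with rwx for the owner, r-x for the group, nothing for others.
mode = 0o100750
print(stat.filemode(mode))        # -rwxr-x---
print(bool(mode & stat.S_IXUSR))  # True: the owner may execute
print(bool(mode & stat.S_IWGRP))  # False: the group may not write
```

For a directory, the same execute bit (`S_IXUSR`/`S_IXGRP`/`S_IXOTH`) acts as the "traverse directory" permission mentioned above.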
== Identification and authentication ==
Identification and authentication (I&A) is the process of verifying that an identity is bound to the entity that makes an assertion or claim of identity. The I&A process assumes that there was an initial validation of the identity, commonly called identity proofing. Various methods of identity proofing are available, ranging from in-person validation using government issued identification, to anonymous methods that allow the claimant to remain anonymous, but known to the system if they return. The method used for identity proofing and validation should provide an assurance level commensurate with the intended use of the identity within the system. Subsequently, the entity asserts an identity together with an authenticator as a means for validation. The only requirement for the identifier is that it must be unique within its security domain.
Authenticators are commonly based on at least one of the following four factors:
Something you know, such as a password or a personal identification number (PIN). This assumes that only the owner of the account knows the password or PIN needed to access the account.
Something you have, such as a smart card or security token. This assumes that only the owner of the account has the necessary smart card or token needed to unlock the account.
Something you are, such as fingerprint, voice, retina, or iris characteristics.
Where you are, for example inside or outside a company firewall, or proximity of login location to a personal GPS device.
== Access approval ==
Access approval is the function that actually grants or rejects access during operations.
During access approval, the system compares the formal representation of the authorization policy with the access request to determine whether the request shall be granted or rejected. Moreover, access evaluation can be performed online, on an ongoing basis.
== Accountability ==
Accountability uses such system components as audit trails (records) and logs, to associate a subject with its actions. The information recorded should be sufficient to map the subject to a controlling user. Audit trails and logs are important for
Detecting security violations
Re-creating security incidents
If logs are not regularly reviewed, or are not maintained in a secure and consistent manner, they may not be admissible as evidence.
Many systems can generate automated reports, based on certain predefined criteria or thresholds, known as clipping levels. For example, a clipping level may be set to generate a report for the following:
More than three failed logon attempts in a given period
Any attempt to use a disabled user account
These reports help a system administrator or security administrator to more easily identify possible break-in attempts.
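A clipping-level report of this kind reduces to counting events against a threshold; a minimal sketch using a hypothetical failed-logon log (the usernames and threshold are illustrative):

```python
from collections import Counter

# Hypothetical failed-logon events within the reporting period.
failed_logons = ["alice", "bob", "alice", "alice", "alice"]

CLIPPING_LEVEL = 3  # report only accounts exceeding three failures
flagged = [user for user, n in Counter(failed_logons).items()
           if n > CLIPPING_LEVEL]
print(flagged)  # ['alice']
```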
(In the context of magnetic storage, "clipping level" has an unrelated meaning: a disk's ability to maintain its magnetic properties and hold its content. A high-quality level range is 65–70%; low quality is below 55%.)
== Access controls ==
Access control models are sometimes categorized as either discretionary or non-discretionary. The three most widely recognized models are Discretionary Access Control (DAC), Mandatory Access Control (MAC), and Role Based Access Control (RBAC). MAC is non-discretionary.
=== Discretionary access control ===
Discretionary access control (DAC) is a policy determined by the owner of an object. The owner decides who is allowed to access the object, and what privileges they have.
Two important concepts in DAC are
File and data ownership: Every object in the system has an owner. In most DAC systems, each object's initial owner is the subject that caused it to be created. The access policy for an object is determined by its owner.
Access rights and permissions: These are the controls that an owner can assign to other subjects for specific resources.
Access controls may be discretionary in ACL-based or capability-based access control systems. (In capability-based systems, there is usually no explicit concept of 'owner', but the creator of an object has a similar degree of control over its access policy.)
=== Mandatory access control ===
Mandatory access control refers to allowing access to a resource if and only if rules exist that allow a given user to access the resource. It is difficult to manage, but its use is usually justified when used to protect highly sensitive information. Examples include certain government and military information. Management is often simplified (over what is required) if the information can be protected using hierarchical access control, or by implementing sensitivity labels. What makes the method "mandatory" is the use of either rules or sensitivity labels.
Sensitivity labels: In such a system subjects and objects must have labels assigned to them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the level of trust required for access. In order to access a given object, the subject must have a sensitivity level equal to or higher than the requested object.
Data import and export: Controlling the import of information from other systems and export to other systems (including printers) is a critical function of these systems, which must ensure that sensitivity labels are properly maintained and implemented so that sensitive information is appropriately protected at all times.
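The sensitivity-label rule described above is a simple dominance check over an ordered set of levels (the level names below are illustrative):

```python
# Ordered sensitivity levels; a higher number means a higher level of trust.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def may_access(subject_label, object_label):
    # The subject's label must be equal to or higher than the object's.
    return LEVELS[subject_label] >= LEVELS[object_label]

print(may_access("secret", "confidential"))  # True
print(may_access("confidential", "secret"))  # False
```

A lattice-based system generalizes this total order to greatest-lower-bound and least-upper-bound comparisons over compartmented labels.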
Two methods are commonly used for applying mandatory access control:
Rule-based (or label-based) access control: This type of control further defines specific conditions for access to a requested object. A Mandatory Access Control system implements a simple form of rule-based access control to determine whether access should be granted or denied by matching:
An object's sensitivity label
A subject's sensitivity label
Lattice-based access control: These can be used for complex access control decisions involving multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.
Few systems implement MAC; XTS-400 and SELinux are examples of systems that do.
=== Role-based access control ===
Role-based access control (RBAC) is an access policy determined by the system, not by the owner. RBAC is used in commercial applications and also in military systems, where multi-level security requirements may also exist. RBAC differs from DAC in that DAC allows users to control access to their resources, while in RBAC, access is controlled at the system level, outside of the user's control. Although RBAC is non-discretionary, it can be distinguished from MAC primarily in the way permissions are handled. MAC controls read and write permissions based on a user's clearance level and additional labels. RBAC controls collections of permissions that may include complex operations such as an e-commerce transaction, or may be as simple as read or write. A role in RBAC can be viewed as a set of permissions.
Three primary rules are defined for RBAC:
Role assignment: A subject can execute a transaction only if the subject has selected or been assigned a suitable role.
Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.
Transaction authorization: A subject can execute a transaction only if the transaction is authorized for the subject's active role. With rules 1 and 2, this rule ensures that users can execute only transactions for which they are authorized.
Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by lower-level sub-roles.
Most IT vendors offer RBAC in one or more products.
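The three RBAC rules above can be sketched as a single check (the roles, users and transactions here are illustrative):

```python
# Illustrative role/permission and user/role assignments.
role_perms = {"cashier": {"open_register", "record_sale"},
              "manager": {"open_register", "record_sale", "void_sale"}}
user_roles = {"dana": {"cashier"}}

def can_execute(user, active_role, transaction):
    # Rules 1 and 2: the active role must be assigned (and thus authorized)
    # for the user; rule 3: the transaction must be authorized for that role.
    return (active_role in user_roles.get(user, set())
            and transaction in role_perms.get(active_role, set()))

print(can_execute("dana", "cashier", "record_sale"))  # True
print(can_execute("dana", "cashier", "void_sale"))    # False
print(can_execute("dana", "manager", "void_sale"))    # False: role not assigned
```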
=== Attribute-based access control ===
In attribute-based access control (ABAC), access is granted not based on the rights of the subject associated with a user after authentication, but based on the attributes of the subject, object, requested operations, and environment conditions against policy, rules, or relationships that describe the allowable operations for a given set of attributes. The user has to prove so-called claims about his or her attributes to the access control engine. An attribute-based access control policy specifies which claims need to be satisfied in order to grant access to an object. For instance the claim could be "older than 18". Any user that can prove this claim is granted access. Users can be anonymous when authentication and identification are not strictly required. One does, however, require means for proving claims anonymously. This can for instance be achieved using anonymous credentials. XACML (extensible access control markup language) is a standard for attribute-based access control. XACML 3.0 was standardized in January 2013.
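The "older than 18" claim from this paragraph can be sketched as a policy of predicates over subject attributes (the attribute names are illustrative):

```python
# A policy is a set of claims that must all be satisfied by the subject's
# attributes; the claim below is the "older than 18" example from the text.
policy = [lambda attrs: attrs.get("age", 0) > 18]

def decide(attrs, claims):
    return all(claim(attrs) for claim in claims)

print(decide({"age": 21}, policy))  # True
print(decide({"age": 16}, policy))  # False
```

Note that the decision never needs the subject's identity, only proof of the attribute, which is what permits anonymous access via anonymous credentials.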
=== Break-Glass Access Control Models ===
Traditionally, access control has the purpose of restricting access, and thus most access control models follow the "default deny principle": if a specific access request is not explicitly allowed, it is denied. This behavior might conflict with the regular operations of a system. In certain situations, humans are willing to take the risk that might be involved in violating an access control policy, if the potential benefit that can be achieved outweighs this risk. This need is especially visible in the health-care domain, where denied access to patient records can cause the death of a patient. Break-glass (also called break-the-glass) approaches try to mitigate this by allowing users to override access control decisions. Break-glass can either be implemented in an access-control-model-specific manner (e.g. within RBAC) or generically (i.e., independent of the underlying access control model).
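A generic (model-independent) break-glass wrapper can be sketched as follows; this is a hypothetical illustration, with the key point that an override of the default-deny decision is always recorded for after-the-fact review:

```python
# Break-glass sketch, generic: it wraps any decision function
# (from RBAC, ABAC, etc.) and allows an audited emergency override.
audit_log = []

def check_with_break_glass(decision_fn, request, emergency=False):
    if decision_fn(request):
        return True
    if emergency:
        # Default deny is overridden; log the event for later review.
        audit_log.append(("override", request))
        return True
    return False
```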
=== Host-based access control (HBAC) ===
The initialism HBAC stands for "host-based access control".
== See also ==
Resource Access Control Facility
== References ==
A network administrator is a person designated in an organization whose responsibility includes maintaining computer infrastructures with emphasis on local area networks (LANs) up to wide area networks (WANs). Responsibilities may vary between organizations, but installing new hardware, on-site servers, enforcing licensing agreements, software-network interactions as well as network integrity and resilience are some of the key areas of focus.
== Duties ==
The role of the network administrator can vary significantly depending on an organization's size, location, and socioeconomic considerations. Some organizations size the role based on a user-to-technical-support ratio.
Network administrators are often involved in proactive work. This type of work will often include:
Designing network infrastructure
Implementing and configuring network hardware and software
Network monitoring and maintaining the network
Testing network for vulnerability & weakness
Providing technical support
Managing network resources
Managing network documentation
Managing vendor relationships
Staying up to date with new technologies and best practices
Providing training and guidance to other team members
Network administrators are responsible for making sure that computer hardware and network infrastructure related to an organization's data network are effectively maintained. In smaller organizations, they are typically involved in the procurement of new hardware, the rollout of new software, maintaining disk images for new computer installs, making sure that licenses are paid for and up to date for software that needs it, maintaining the standards for server installations and applications, monitoring the performance of the network, and checking for security breaches and poor data management practices. A common question for the small-to-medium business (SMB) network administrator is: how much bandwidth do I need to run my business? Typically, within a larger organization, these roles are split into multiple roles or functions across various divisions and are not handled by one individual. In other organizations, some of these roles are carried out by system administrators.
As with many technical roles, network administrator positions require a breadth of technical knowledge and the ability to learn the intricacies of new networking and server software packages quickly. Within smaller organizations, the more senior role of network engineer is sometimes attached to the responsibilities of the network administrator. It is common for smaller organizations to outsource this function.
== See also ==
Network analyzer (disambiguation)
Network architecture
Network management system
System administrator
Technical support
== References ==
The Erdős–Turán conjecture is an old unsolved problem in additive number theory (not to be confused with Erdős conjecture on arithmetic progressions) posed by Paul Erdős and Pál Turán in 1941.
It concerns additive bases, subsets of natural numbers with the property that every natural number can be represented as the sum of a bounded number of elements from the basis. Roughly, it states that the number of representations of this type cannot also be bounded.
== Background and formulation ==
The question concerns subsets of the natural numbers, typically denoted by ℕ, called additive bases. A subset B is called an (asymptotic) additive basis of finite order if there is some positive integer h such that every sufficiently large natural number n can be written as the sum of at most h elements of B. For example, the natural numbers are themselves an additive basis of order 1, since every natural number is trivially a sum of at most one natural number. Lagrange's four-square theorem says that the set of positive square numbers is an additive basis of order 4. Another highly non-trivial and celebrated result along these lines is Vinogradov's theorem.
One is naturally inclined to ask whether these results are optimal. It turns out that Lagrange's four-square theorem cannot be improved, as there are infinitely many positive integers which are not the sum of three squares. This is because no positive integer which is the sum of three squares can leave a remainder of 7 when divided by 8. However, one should perhaps expect that a set B which is about as sparse as the squares (meaning that in a given interval [1, N], roughly N^{1/2} of the integers in [1, N] lie in B) and which does not have this obvious deficit should have the property that every sufficiently large positive integer is the sum of three elements from B. This follows from the following probabilistic model: suppose that N/2 < n ≤ N is a positive integer, and x₁, x₂, x₃ are 'randomly' selected from B ∩ [1, N]. Then the probability of a given element from B being chosen is roughly 1/N^{1/2}. One can then estimate the expected value, which in this case will be quite large. Thus, we 'expect' that there are many representations of n as a sum of three elements from B, unless there is some arithmetic obstruction (which means that B is somehow quite different from a 'typical' set of the same density), as with the squares. Therefore, one should expect that the squares are quite inefficient at representing positive integers as the sum of four elements, since there should already be lots of representations as sums of three elements for those positive integers n that passed the arithmetic obstruction. Examining Vinogradov's theorem quickly reveals that the primes are also very inefficient at representing positive integers as the sum of four primes, for instance.
This begets the question: suppose that B, unlike the squares or the prime numbers, is very efficient at representing positive integers as a sum of h elements of B. How efficient can it be? The best possibility is that we can find a positive integer h and a set B such that every positive integer n is the sum of at most h elements of B in exactly one way. Failing that, perhaps we can find a B such that every positive integer n is the sum of at most h elements of B in at least one way and at most S(h) ways, where S is a function of h.
This is basically the question that Paul Erdős and Pál Turán asked in 1941. Indeed, they conjectured a negative answer to this question, namely that if B is an additive basis of order h of the natural numbers, then it cannot represent positive integers as a sum of at most h elements too efficiently; the number of representations of n, as a function of n, must tend to infinity.
== History ==
The conjecture was made jointly by Paul Erdős and Pál Turán in 1941. In the original paper, they write
"(2) If f(n) > 0 for n > n₀, then
{\displaystyle \varlimsup _{n\rightarrow \infty }f(n)=\infty }",
where lim sup denotes the limit superior. Here f(n) is the number of ways one can write the natural number n as the sum of two (not necessarily distinct) elements of B. If f(n) is always positive for sufficiently large n, then B is called an additive basis (of order 2). This problem has attracted significant attention but remains unsolved.
In 1964, Erdős published a multiplicative version of this conjecture.
== Progress ==
While the conjecture remains unsolved, there have been some advances on the problem. First, we express the problem in modern language. For a given subset B ⊂ ℕ, we define its representation function
{\displaystyle r_{B}(n)=\#\{(a_{1},a_{2})\in B^{2}\mid a_{1}+a_{2}=n\}.}
Then the conjecture states that if r_B(n) > 0 for all sufficiently large n, then
{\displaystyle \limsup _{n\rightarrow \infty }r_{B}(n)=\infty .}
More generally, for any h ∈ ℕ and subset B ⊂ ℕ, we can define the h-fold representation function as
{\displaystyle r_{B,h}(n)=\#\{(a_{1},\cdots ,a_{h})\in B^{h}\mid a_{1}+\cdots +a_{h}=n\}.}
We say that B is an additive basis of order h if r_{B,h}(n) > 0 for all sufficiently large n. One can see from an elementary argument that if B is an additive basis of order h, then
{\displaystyle n\leq \sum _{m=1}^{n}r_{B,h}(m)\leq |B\cap [1,n]|^{h},}
so we obtain the lower bound n^{1/h} ≤ |B ∩ [1, n]|.
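For small cases the representation function can be computed by brute force. The following Python sketch (illustrative only; the helper name is hypothetical) checks Lagrange's theorem for n = 7, which needs four squares:

```python
from itertools import product

def r_Bh(B, h, n):
    """Number of ordered h-tuples of elements of B summing to n."""
    return sum(1 for t in product(B, repeat=h) if sum(t) == n)

squares = [k * k for k in range(1, 11)]  # 1, 4, 9, ..., 100

# 7 = 4 + 1 + 1 + 1 is a sum of four squares, but (since 7 leaves
# remainder 7 mod 8) it is not a sum of three squares.
print(r_Bh(squares, 4, 7) > 0, r_Bh(squares, 3, 7) == 0)
```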
The original conjecture arose as Erdős and Turán sought a partial answer to Sidon's problem (see: Sidon sequence). Later, Erdős set out to answer the following question posed by Sidon: how close to the lower bound |B ∩ [1, n]| ≥ n^{1/h} can an additive basis B of order h get? This question was answered in the case h = 2 by Erdős in 1956. Erdős proved that there exists an additive basis B of order 2 and constants c₁, c₂ > 0 such that
{\displaystyle c_{1}\log n\leq r_{B}(n)\leq c_{2}\log n}
for all n sufficiently large. In particular, this implies that there exists an additive basis B with |B ∩ [1, n]| = n^{1/2 + o(1)}, which is essentially best possible. This motivated Erdős to make the following conjecture:
If B is an additive basis of order h, then
{\displaystyle \limsup _{n\rightarrow \infty }r_{B}(n)/\log n>0.}
In 1986, Eduard Wirsing proved that a large class of additive bases, including the prime numbers, contains a subset that is an additive basis but significantly thinner than the original. In 1990, Erdős and Prasad V. Tetali extended Erdős's 1956 result to bases of arbitrary order. In 2000, V. Vu proved that thin subbases exist in the Waring bases using the Hardy–Littlewood circle method and his polynomial concentration results. In 2006, Borwein, Choi, and Chu proved that for all additive bases B, f(n) eventually exceeds 7.
== References ==
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem.
== History ==
The initial idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and as of 2022 the method still yields results. The method is the subject of a monograph Vaughan (1997) by R. C. Vaughan.
== Outline ==
The goal is to prove asymptotic behavior of a series: to show that aₙ ~ F(n) for some function F. This is done by taking the generating function of the series, then computing the residues about zero (essentially the Fourier coefficients). Technically, the generating function is scaled to have radius of convergence 1, so it has singularities on the unit circle – thus one cannot take the contour integral over the unit circle.
The circle method is specifically how to compute these residues, by partitioning the circle into minor arcs (the bulk of the circle) and major arcs (small arcs containing the most significant singularities), and then bounding the behavior on the minor arcs. The key insight is that, in many cases of interest (such as theta functions), the singularities occur at the roots of unity, and the significance of the singularities is in the order of the Farey sequence. Thus one can investigate the most significant singularities, and, if fortunate, compute the integrals.
=== Setup ===
The circle in question was initially the unit circle in the complex plane. Assuming the problem had first been formulated in the terms that for a sequence of complex numbers an for n = 0, 1, 2, 3, ..., we want some asymptotic information of the type an ~ F(n), where we have some heuristic reason to guess the form taken by F (an ansatz), we write
{\displaystyle f(z)=\sum a_{n}z^{n},}
a power series generating function. The interesting cases are where f is then of radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation.
=== Residues ===
From that formulation, it follows directly from the residue theorem that
{\displaystyle I_{n}=\oint _{C}f(z)z^{-(n+1)}\,dz=2\pi ia_{n}}
for integers n ≥ 0, where C is a circle of radius r centred at 0, for any r with 0 < r < 1; in other words, Iₙ is a contour integral over that circle, traversed once anticlockwise. We would like to take r = 1 directly, that is, to use the unit circle contour. In the complex-analysis formulation this is problematic, since the values of f may not be defined there.
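As a purely numerical illustration (not part of the classical analytic method), the contour integral for Iₙ can be approximated by sampling f on a circle of radius r < 1 and averaging. The Python sketch below recovers partition numbers from the partition generating function ∏ 1/(1 − q^k); the truncation depth K and sample count M are choices of this sketch:

```python
import cmath

# Recover power-series coefficients a_n by a discrete version of the
# contour integral I_n: sample f on a circle of radius r < 1 and
# average f(z) * z^(-n). Here f is the partition generating function
# Prod 1/(1 - q^k), truncated at k <= K.
def f(q, K=40):
    val = 1.0
    for k in range(1, K + 1):
        val /= (1 - q ** k)
    return val

def coeff(n, r=0.5, M=2048):
    s = 0j
    for j in range(M):
        z = r * cmath.exp(2j * cmath.pi * j / M)
        s += f(z) / z ** n
    return (s / M).real

# The partition numbers p(5) = 7 and p(10) = 42.
print(round(coeff(5)), round(coeff(10)))
```

The discretization error consists of aliased coefficients damped by r^M, which is negligible here; this is the same trade-off (radius versus singular behaviour near the circle) that the circle method must manage analytically.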
=== Singularities on unit circle ===
The problem addressed by the circle method is to force the issue of taking r = 1, by a good understanding of the nature of the singularities f exhibits on the unit circle. The fundamental insight is the role played by the Farey sequence of rational numbers, or equivalently by the roots of unity:
{\displaystyle \zeta =\exp \left({\frac {2\pi ir}{s}}\right).}
Here the denominator s, assuming that r/s is in lowest terms, turns out to determine the relative importance of the singular behaviour of typical f near ζ.
=== Method ===
The Hardy–Littlewood circle method, for the complex-analytic formulation, can then be thus expressed. The contributions to the evaluation of In, as r → 1, should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity ζ into two classes, according to whether s ≤ N or s > N, where N is a function of n that is ours to choose conveniently. The integral In is divided up into integrals each on some arc of the circle that is adjacent to ζ, of length a function of s (again, at our discretion). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up 2πiF(n) (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than F(n).
== Discussion ==
Stated boldly like this, it is not at all clear that this can be made to work. The insights involved are quite deep. One clear source is the theory of theta functions.
=== Waring's problem ===
In the context of Waring's problem, powers of theta functions are the generating functions for the sum of squares function. Their analytic behaviour is known in much more accurate detail than for the cubes, for example.
It is the case, as the false-colour diagram indicates, that for a theta function the 'most important' point on the boundary circle is at z = 1; followed by z = −1, and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity i and −i that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity.
In the case of Waring's problem, one takes a sufficiently high power of the generating function to force the situation in which the singularities, organised into the so-called singular series, predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the case of the partition function, which signalled the possibility that in a favourable situation the losses from estimates could be controlled.
=== Vinogradov trigonometric sums ===
Later, I. M. Vinogradov extended the technique, replacing the exponential sum formulation f(z) with a finite Fourier series, so that the relevant integral In is a Fourier coefficient. Vinogradov applied finite sums to Waring's problem in 1926, and the general trigonometric sum method became known as "the circle method of Hardy, Littlewood and Ramanujan, in the form of Vinogradov's trigonometric sums". Essentially all this does is to discard the whole 'tail' of the generating function, allowing the business of r in the limiting operation to be set directly to the value 1.
== Applications ==
Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables k is large relative to the degree d (see Birch's theorem for example). This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If d is fixed and k is small, other methods are required, and indeed the Hasse principle tends to fail.
== Rademacher's contour ==
In the special case when the circle method is applied to find the coefficients of a modular form of negative weight, Hans Rademacher found a modification of the contour that makes the series arising from the circle method converge to the exact result. To describe his contour, it is convenient to replace the unit circle by the upper half plane, by making the substitution z = exp(2πiτ), so that the contour integral becomes an integral from τ = i to τ = 1 + i. (The number i could be replaced by any number on the upper half-plane, but i is the most convenient choice.) Rademacher's contour is (more or less) given by the boundaries of all the Ford circles from 0 to 1, as shown in the diagram. The replacement of the line from i to 1 + i by the boundaries of these circles is a non-trivial limiting process, which can be justified for modular forms that have negative weight, and with more care can also be justified for non-constant terms for the case of weight 0 (in other words modular functions).
== Notes ==
== References ==
Apostol, Tom M. (1990), Modular functions and Dirichlet series in number theory (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-97127-8
Mardzhanishvili, K. K. (1985), "Ivan Matveevich Vinogradov: a brief outline of his life and works", I. M. Vinogradov, Selected Works, Berlin
Rademacher, Hans (1943), "On the expansion of the partition function in a series", Annals of Mathematics, Second Series, 44 (3): 416–422, doi:10.2307/1968973, JSTOR 1968973, MR 0008618
Vaughan, R. C. (1997), The Hardy–Littlewood Method, Cambridge Tracts in Mathematics, vol. 125 (2nd ed.), Cambridge University Press, ISBN 978-0-521-57347-4
== Further reading ==
Wang, Yuan (1991). Diophantine equations and inequalities in algebraic number fields. Berlin: Springer-Verlag. doi:10.1007/978-3-642-58171-7. ISBN 9783642634895. OCLC 851809136.
== External links ==
Terence Tao, Heuristic limitations of the circle method, a blog post in 2012
In mathematics, the Dedekind eta function, named after Richard Dedekind, is a modular form of weight 1/2 and is a function defined on the upper half-plane of complex numbers, where the imaginary part is positive. It also occurs in bosonic string theory.
== Definition ==
For any complex number τ with Im(τ) > 0, let q = e2πiτ; then the eta function is defined by,
{\displaystyle \eta (\tau )=e^{\frac {\pi i\tau }{12}}\prod _{n=1}^{\infty }\left(1-e^{2n\pi i\tau }\right)=q^{\frac {1}{24}}\prod _{n=1}^{\infty }\left(1-q^{n}\right).}
Raising the eta equation to the 24th power and multiplying by (2π)12 gives
{\displaystyle \Delta (\tau )=(2\pi )^{12}\eta ^{24}(\tau )}
where Δ is the modular discriminant. The presence of 24 can be understood by connection with other occurrences, such as in the 24-dimensional Leech lattice.
The eta function is holomorphic on the upper half-plane but cannot be continued analytically beyond it.
The eta function satisfies the functional equations
{\displaystyle {\begin{aligned}\eta (\tau +1)&=e^{\frac {\pi i}{12}}\eta (\tau ),\\\eta \left(-{\frac {1}{\tau }}\right)&={\sqrt {-i\tau }}\,\eta (\tau ).\end{aligned}}}
In the second equation the branch of the square root is chosen such that √−iτ = 1 when τ = i.
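Both transformation laws are easy to confirm numerically from the q-product. The Python sketch below (with a truncation depth K chosen for illustration) verifies them at an arbitrary point of the upper half-plane, using the principal branch of the complex square root, which agrees with the convention above since Re(−iτ) = Im(τ) > 0:

```python
import cmath

def eta(tau, K=200):
    """Dedekind eta via its q-product, truncated after K factors."""
    q = cmath.exp(2j * cmath.pi * tau)
    val = cmath.exp(1j * cmath.pi * tau / 12)
    for n in range(1, K + 1):
        val *= 1 - q ** n
    return val

tau = 0.3 + 1.1j
lhs1 = eta(tau + 1)
rhs1 = cmath.exp(1j * cmath.pi / 12) * eta(tau)
lhs2 = eta(-1 / tau)
rhs2 = cmath.sqrt(-1j * tau) * eta(tau)
print(abs(lhs1 - rhs1) < 1e-9, abs(lhs2 - rhs2) < 1e-9)
```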
More generally, suppose a, b, c, d are integers with ad − bc = 1, so that
{\displaystyle \tau \mapsto {\frac {a\tau +b}{c\tau +d}}}
is a transformation belonging to the modular group. We may assume that either c > 0, or c = 0 and d = 1. Then
{\displaystyle \eta \left({\frac {a\tau +b}{c\tau +d}}\right)=\epsilon (a,b,c,d)\left(c\tau +d\right)^{\frac {1}{2}}\eta (\tau ),}
where
{\displaystyle \epsilon (a,b,c,d)={\begin{cases}e^{\frac {bi\pi }{12}}&c=0,\,d=1,\\e^{i\pi \left({\frac {a+d}{12c}}-s(d,c)-{\frac {1}{4}}\right)}&c>0.\end{cases}}}
Here s(h,k) is the Dedekind sum
{\displaystyle s(h,k)=\sum _{n=1}^{k-1}{\frac {n}{k}}\left({\frac {hn}{k}}-\left\lfloor {\frac {hn}{k}}\right\rfloor -{\frac {1}{2}}\right).}
Because of these functional equations the eta function is a modular form of weight 1/2 and level 1 for a certain character of order 24 of the metaplectic double cover of the modular group, and can be used to define other modular forms. In particular the modular discriminant of the Weierstrass elliptic function with
ω₂ = τω₁
can be defined as
{\displaystyle \Delta (\tau )=(2\pi \omega _{1})^{12}\eta (\tau )^{24}}
and is a modular form of weight 12. Some authors omit the factor of (2π)12, so that the series expansion has integral coefficients.
The Jacobi triple product implies that the eta is (up to a factor) a Jacobi theta function for special values of the arguments:
{\displaystyle \eta (\tau )=\sum _{n=1}^{\infty }\chi (n)\exp \left({\frac {\pi in^{2}\tau }{12}}\right),}
where χ(n) is "the" Dirichlet character modulo 12 with χ(±1) = 1 and χ(±5) = −1. Explicitly,
{\displaystyle \eta (\tau )=e^{\frac {\pi i\tau }{12}}\vartheta \left({\frac {\tau +1}{2}};3\tau \right).}
The Euler function
{\displaystyle {\begin{aligned}\phi (q)&=\prod _{n=1}^{\infty }\left(1-q^{n}\right)\\&=q^{-{\frac {1}{24}}}\eta (\tau ),\end{aligned}}}
has a power series by the Euler identity:
{\displaystyle \phi (q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{\frac {3n^{2}-n}{2}}.}
Note that by using Euler's pentagonal number theorem for Im(τ) > 0, the eta function can be expressed as
{\displaystyle \eta (\tau )=\sum _{n=-\infty }^{\infty }e^{\pi in}e^{3\pi i\left(n+{\frac {1}{6}}\right)^{2}\tau }.}
This can be proved by using x = 2πiτ in the pentagonal number theorem, together with the definition of the eta function.
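The pentagonal-number expansion is easy to confirm numerically. The Python sketch below compares the truncated product ∏(1 − q^n) with the truncated pentagonal series at a real q with |q| < 1; the truncation depths are illustrative choices:

```python
# Numerical check of Euler's pentagonal number theorem:
# prod (1 - q^n) equals sum (-1)^n q^((3n^2 - n)/2) over all integers n.
def phi_product(q, K=100):
    val = 1.0
    for n in range(1, K + 1):
        val *= 1 - q ** n
    return val

def phi_pentagonal(q, K=100):
    # (3n^2 - n)/2 is a (generalized) pentagonal number, an integer.
    return sum((-1) ** n * q ** ((3 * n * n - n) // 2)
               for n in range(-K, K + 1))

q = 0.3
print(abs(phi_product(q) - phi_pentagonal(q)) < 1e-12)
```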
Another way to see the eta function is through the following limit:
{\displaystyle \lim _{z\to 0}{\frac {\vartheta _{1}(z|\tau )}{z}}=2\pi \eta ^{3}(\tau ),}
which alternatively is
{\displaystyle \sum _{n=0}^{\infty }(-1)^{n}(2n+1)q^{\frac {(2n+1)^{2}}{8}}=\eta ^{3}(\tau ),}
where ϑ₁(z|τ) is the Jacobi theta function and ϑ₁(z|τ) = −ϑ₁₁(z; τ).
Because the eta function is easy to compute numerically from either power series, it is often helpful in computation to express other functions in terms of it when possible, and products and quotients of eta functions, called eta quotients, can be used to express a great variety of modular forms.
The picture on this page shows the modulus of the Euler function: the additional factor of q1/24 between this and eta makes almost no visual difference whatsoever. Thus, this picture can be taken as a picture of eta as a function of q.
== Combinatorial identities ==
The theory of the algebraic characters of the affine Lie algebras gives rise to a large class of previously unknown identities for the eta function. These identities follow from the Weyl–Kac character formula, and more specifically from the so-called "denominator identities". The characters themselves allow the construction of generalizations of the Jacobi theta function which transform under the modular group; this is what leads to the identities. An example of one such new identity is
{\displaystyle \eta (8\tau )\eta (16\tau )=\sum _{m,n\in \mathbb {Z} \atop m\leq |3n|}(-1)^{m}q^{(2m+1)^{2}-32n^{2}}}
where q = e2πiτ is the q-analog or "deformation" of the highest weight of a module.
== Special values ==
From the above connection with the Euler function together with the special values of the latter, it can be easily deduced that
η
(
i
)
=
Γ
(
1
4
)
2
π
3
4
η
(
1
2
i
)
=
Γ
(
1
4
)
2
7
8
π
3
4
η
(
2
i
)
=
Γ
(
1
4
)
2
11
8
π
3
4
η
(
3
i
)
=
Γ
(
1
4
)
2
3
3
(
3
+
2
3
)
1
12
π
3
4
η
(
4
i
)
=
−
1
+
2
4
Γ
(
1
4
)
2
29
16
π
3
4
η
(
e
2
π
i
3
)
=
e
−
π
i
24
3
8
Γ
(
1
3
)
3
2
2
π
{\displaystyle {\begin{aligned}\eta (i)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2\pi ^{\frac {3}{4}}}}\\[6pt]\eta \left({\tfrac {1}{2}}i\right)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2^{\frac {7}{8}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta (2i)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2^{\frac {11}{8}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta (3i)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2{\sqrt[{3}]{3}}\left(3+2{\sqrt {3}}\right)^{\frac {1}{12}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta (4i)&={\frac {{\sqrt[{4}]{-1+{\sqrt {2}}}}\,\Gamma \left({\frac {1}{4}}\right)}{2^{\frac {29}{16}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta \left(e^{\frac {2\pi i}{3}}\right)&=e^{-{\frac {\pi i}{24}}}{\frac {{\sqrt[{8}]{3}}\,\Gamma \left({\frac {1}{3}}\right)^{\frac {3}{2}}}{2\pi }}\end{aligned}}}
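These closed forms can be confirmed numerically from the q-product. The Python sketch below (the truncation depth K is an illustrative choice; convergence at τ = i is very fast since q = e^(−2π)) checks the first value against Γ(1/4)/(2π^(3/4)):

```python
import cmath, math

def eta(tau, K=60):
    """Dedekind eta via its q-product, truncated after K factors."""
    q = cmath.exp(2j * cmath.pi * tau)
    val = cmath.exp(1j * cmath.pi * tau / 12)
    for n in range(1, K + 1):
        val *= 1 - q ** n
    return val

# Special value eta(i) = Gamma(1/4) / (2 * pi^(3/4)).
closed_form = math.gamma(0.25) / (2 * math.pi ** 0.75)
print(abs(eta(1j).real - closed_form) < 1e-12)
```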
== Eta quotients ==
Eta quotients are defined by quotients of the form
{\displaystyle \prod _{0<d\mid N}\eta (d\tau )^{r_{d}}}
where the product runs over the positive divisors d of a positive integer N and each exponent rd is an integer. Linear combinations of eta quotients at imaginary quadratic arguments may be algebraic, while combinations of eta quotients may even be integral. For example, define,
{\displaystyle {\begin{aligned}j(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (2\tau )}}\right)^{8}+2^{8}\left({\frac {\eta (2\tau )}{\eta (\tau )}}\right)^{16}\right)^{3}\\[6pt]j_{2A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (2\tau )}}\right)^{12}+2^{6}\left({\frac {\eta (2\tau )}{\eta (\tau )}}\right)^{12}\right)^{2}\\[6pt]j_{3A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (3\tau )}}\right)^{6}+3^{3}\left({\frac {\eta (3\tau )}{\eta (\tau )}}\right)^{6}\right)^{2}\\[6pt]j_{4A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (4\tau )}}\right)^{4}+4^{2}\left({\frac {\eta (4\tau )}{\eta (\tau )}}\right)^{4}\right)^{2}=\left({\frac {\eta ^{2}(2\tau )}{\eta (\tau )\,\eta (4\tau )}}\right)^{24}\end{aligned}}}
with the 24th power of the Weber modular function 𝔣(τ). Then,
{\displaystyle {\begin{aligned}j\left({\frac {1+{\sqrt {-163}}}{2}}\right)&=-640320^{3},&e^{\pi {\sqrt {163}}}&\approx 640320^{3}+743.99999999999925\dots \\[6pt]j_{2A}\left({\frac {\sqrt {-58}}{2}}\right)&=396^{4},&e^{\pi {\sqrt {58}}}&\approx 396^{4}-104.00000017\dots \\[6pt]j_{3A}\left({\frac {1+{\sqrt {-{\frac {89}{3}}}}}{2}}\right)&=-300^{3},&e^{\pi {\sqrt {\frac {89}{3}}}}&\approx 300^{3}+41.999971\dots \\[6pt]j_{4A}\left({\frac {\sqrt {-7}}{2}}\right)&=2^{12},&e^{\pi {\sqrt {7}}}&\approx 2^{12}-24.06\dots \end{aligned}}}
and so on, values which appear in Ramanujan–Sato series.
Eta quotients may also be a useful tool for describing bases of modular forms, which are notoriously difficult to compute and express directly. In 1993 Basil Gordon and Kim Hughes proved that if an eta quotient ηg of the form given above, namely
{\displaystyle \prod _{0<d\mid N}\eta (d\tau )^{r_{d}}}
satisfies
{\displaystyle \sum _{0<d\mid N}dr_{d}\equiv 0{\pmod {24}}\quad {\text{and}}\quad \sum _{0<d\mid N}{\frac {N}{d}}r_{d}\equiv 0{\pmod {24}},}
then ηg is a weight k modular form for the congruence subgroup Γ0(N) (up to holomorphicity) where
{\displaystyle k={\frac {1}{2}}\sum _{0<d\mid N}r_{d}.}
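The congruence conditions and the weight formula above can be checked mechanically. In the sketch below, the helper and its name are our own, not from the literature; an eta quotient is given as a dict `{d: r_d}` with each d dividing N.

```python
from fractions import Fraction

def eta_quotient_weight(N, r):
    """Return k = (1/2)*sum(r_d) if both mod-24 congruences hold, else None.

    r is a dict {d: r_d} with every d dividing N.
    """
    assert all(N % d == 0 for d in r)
    if sum(d * rd for d, rd in r.items()) % 24 != 0:
        return None
    if sum((N // d) * rd for d, rd in r.items()) % 24 != 0:
        return None
    return Fraction(sum(r.values()), 2)

# eta(tau)^24 is the discriminant Delta: weight 12 on Gamma_0(1)
print(eta_quotient_weight(1, {1: 24}))         # 12
# eta(tau)^2 eta(11*tau)^2: weight 2 on Gamma_0(11)
print(eta_quotient_weight(11, {1: 2, 11: 2}))  # 2
```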
This result was extended in 2019, showing that the converse holds when N is coprime to 6; whether the original theorem is sharp for all integers N remains open. The result also extends to state that any modular eta quotient on a level-N congruence subgroup must be a modular form for the group Γ(N). While these theorems characterize modular eta quotients, the condition of holomorphicity must be checked separately using a theorem that emerged from the work of Gérard Ligozat and Yves Martin:
If ηg is an eta quotient satisfying the above conditions for the integer N and c and d are coprime integers, then the order of vanishing at the cusp c/d relative to Γ0(N) is
{\displaystyle {\frac {N}{24}}\sum _{0<\delta \mid N}{\frac {\gcd \left(d,\delta \right)^{2}r_{\delta }}{\gcd \left(d,{\frac {N}{d}}\right)d\delta }}.}
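The order-of-vanishing formula translates directly into code. The sketch below (helper name is ours) uses gcd(d, N/d) in the denominator, as in Ligozat's theorem; the computed orders agree with the valence formula for Γ0(N) on the examples shown.

```python
from fractions import Fraction
from math import gcd

def cusp_order(N, r, d):
    """Order of vanishing of prod_{delta|N} eta(delta*tau)^r_delta
    at the cusp c/d, relative to Gamma_0(N)."""
    total = Fraction(0)
    for delta, rd in r.items():
        total += Fraction(gcd(d, delta) ** 2 * rd,
                          gcd(d, N // d) * d * delta)
    return Fraction(N, 24) * total

# Delta = eta(tau)^24 on Gamma_0(1): a simple zero at the single cusp
print(cusp_order(1, {1: 24}, 1))           # 1
# eta(tau)^2 eta(11*tau)^2 on Gamma_0(11): order 1 at both cusps (d=1 and d=11)
print(cusp_order(11, {1: 2, 11: 2}, 1))    # 1
print(cusp_order(11, {1: 2, 11: 2}, 11))   # 1
```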
These theorems provide an effective means of creating holomorphic modular eta quotients; however, this may not be sufficient to construct a basis for a vector space of modular forms and cusp forms. A useful theorem for limiting the number of modular eta quotients to consider states that a holomorphic weight k modular eta quotient on Γ0(N) must satisfy
{\displaystyle \sum _{0<d\mid N}|r_{d}|\leq \prod _{p\mid N}\left({\frac {p+1}{p-1}}\right)^{\min {\bigl (}2,{\text{ord}}_{p}(N){\bigr )}},}
where ordp(N) denotes the largest integer m such that pm divides N.
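The right-hand bound is easy to evaluate in practice (function names below are our own):

```python
from fractions import Fraction

def prime_factors(N):
    """Return {p: ord_p(N)} by trial division."""
    f, p = {}, 2
    while p * p <= N:
        while N % p == 0:
            f[p] = f.get(p, 0) + 1
            N //= p
        p += 1
    if N > 1:
        f[N] = f.get(N, 0) + 1
    return f

def r_sum_bound(N):
    """prod over p | N of ((p+1)/(p-1))^min(2, ord_p(N))."""
    bound = Fraction(1)
    for p, e in prime_factors(N).items():
        bound *= Fraction(p + 1, p - 1) ** min(2, e)
    return bound

print(r_sum_bound(6))    # (3/1) * (4/2) = 6
print(r_sum_bound(12))   # (3/1)^2 * (4/2) = 18
```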
These results lead to several characterizations of spaces of modular forms that can be spanned by modular eta quotients. Using the graded ring structure on the ring of modular forms, we can compute bases of vector spaces of modular forms composed of
{\displaystyle \mathbb {C} }
-linear combinations of eta-quotients. For example, if we assume N = pq is a semiprime then the following process can be used to compute an eta-quotient basis of Mk(Γ0(N)).
A collection of over 6300 product identities for the Dedekind eta function in a canonical, standardized form is available at the Wayback Machine archive of Michael Somos' website.
== See also ==
Chowla–Selberg formula
Ramanujan–Sato series
q-series
Weierstrass elliptic function
Partition function
Kronecker limit formula
Affine Lie algebra
== References ==
== Further reading ==
Apostol, Tom M. (1990). Modular Functions and Dirichlet Series in Number Theory. Graduate Texts in Mathematics. Vol. 41 (2nd ed.). Springer-Verlag. ch. 3. ISBN 3-540-97127-0.
Koblitz, Neal (1993). Introduction to Elliptic Curves and Modular Forms. Graduate Texts in Mathematics. Vol. 97 (2nd ed.). Springer-Verlag. ISBN 3-540-97966-2. | Wikipedia/Dedekind_eta_function |
In mathematics, generalized functions are objects extending the notion of functions on real or complex numbers. There is more than one recognized theory, for example the theory of distributions. Generalized functions are especially useful for treating discontinuous functions more like smooth functions, and describing discrete physical phenomena such as point charges. They are applied extensively, especially in physics and engineering. Important motivations have been the technical requirements of theories of partial differential equations and group representations.
A common feature of some of the approaches is that they build on operator aspects of everyday, numerical functions. The early history is connected with some ideas on operational calculus, and some contemporary developments are closely related to Mikio Sato's algebraic analysis.
== Some early history ==
In the mathematics of the nineteenth century, aspects of generalized function theory appeared, for example in the definition of the Green's function, in the Laplace transform, and in Riemann's theory of trigonometric series, which were not necessarily the Fourier series of an integrable function. These were disconnected aspects of mathematical analysis at the time.
The intensive use of the Laplace transform in engineering led to the heuristic use of symbolic methods, called operational calculus. Since justifications were given that used divergent series, these methods were questionable from the point of view of pure mathematics. They are typical of later application of generalized function methods. An influential book on operational calculus was Oliver Heaviside's Electromagnetic Theory of 1899.
When the Lebesgue integral was introduced, there was for the first time a notion of generalized function central to mathematics. An integrable function, in Lebesgue's theory, is equivalent to any other which is the same almost everywhere. That means its value at each point is (in a sense) not its most important feature. In functional analysis a clear formulation is given of the essential feature of an integrable function, namely the way it defines a linear functional on other functions. This allows a definition of weak derivative.
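The "function as linear functional" idea can be made concrete numerically: pairing the Heaviside step H against −φ′ recovers φ(0), which is exactly the statement that the weak derivative of H is the Dirac delta. In the sketch below, the grid and test function φ(x) = exp(−x²) are our arbitrary choices.

```python
import math

# Approximate <H', phi> = -<H, phi'> = -integral_0^inf phi'(x) dx = phi(0) = 1.
N = 200000
a, b = -5.0, 5.0
dx = (b - a) / N
xs = [a + i * dx for i in range(N + 1)]
phi = [math.exp(-t * t) for t in xs]

# central differences for phi'
dphi = [(phi[min(i + 1, N)] - phi[max(i - 1, 0)]) / (2 * dx)
        for i in range(N + 1)]

pairing = -sum(dp for t, dp in zip(xs, dphi) if t >= 0) * dx
print(round(pairing, 4))   # ~ 1.0
```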
During the late 1920s and 1930s further basic steps were taken. The Dirac delta function was boldly defined by Paul Dirac (an aspect of his scientific formalism); this was to treat measures, thought of as densities (such as charge density) like genuine functions. Sergei Sobolev, working in partial differential equation theory, defined the first rigorous theory of generalized functions in order to define weak solutions of partial differential equations (i.e. solutions which are generalized functions, but may not be ordinary functions). Others proposing related theories at the time were Salomon Bochner and Kurt Friedrichs. Sobolev's work was extended by Laurent Schwartz.
== Schwartz distributions ==
The most definitive development was the theory of distributions developed by Laurent Schwartz, systematically working out the principle of duality for topological vector spaces. Its main rival in applied mathematics is mollifier theory, which uses sequences of smooth approximations (the 'James Lighthill' explanation).
This theory was very successful and is still widely used, but suffers from the main drawback that distributions cannot usually be multiplied: unlike most classical function spaces, they do not form an algebra. For example, it is meaningless to square the Dirac delta function. Work of Schwartz from around 1954 showed this to be an intrinsic difficulty.
== Algebras of generalized functions ==
Some solutions to the multiplication problem have been proposed. One is based on a simple definition of generalized functions given by Yu. V. Egorov (see also his article in Demidov's book in the book list below) that allows arbitrary operations on, and between, generalized functions.
Another solution allowing multiplication is suggested by the path integral formulation of quantum mechanics. Since this is required to be equivalent to the Schrödinger theory of quantum mechanics, which is invariant under coordinate transformations, this property must be shared by path integrals. This fixes all products of generalized functions, as shown by H. Kleinert and A. Chervyakov. The result is equivalent to what can be derived from dimensional regularization.
Several constructions of algebras of generalized functions have been proposed, among others those by Yu. M. Shirokov and those by E. Rosinger, Y. Egorov, and R. Robinson. In the first case, the multiplication is determined by a regularization of the generalized functions; in the second, the algebra is constructed as a multiplication of distributions. Both cases are discussed below.
=== Non-commutative algebra of generalized functions ===
The algebra of generalized functions can be built up with an appropriate procedure of projection of a function {\displaystyle F=F(x)} to its smooth part {\displaystyle F_{\rm {smooth}}} and its singular part {\displaystyle F_{\rm {singular}}}. The product of generalized functions {\displaystyle F} and {\displaystyle G} appears as
{\displaystyle FG=F_{\rm {smooth}}\,G_{\rm {smooth}}+F_{\rm {smooth}}\,G_{\rm {singular}}+F_{\rm {singular}}\,G_{\rm {smooth}}.\qquad (1)}
Such a rule applies to both the space of main functions and the space of operators which act on the space of the main functions.
The associativity of multiplication is achieved; and the function signum is defined in such a way, that its square is unity everywhere (including the origin of coordinates). Note that the product of singular parts does not appear in the right-hand side of (1); in particular,
{\displaystyle \delta (x)^{2}=0}
. Such a formalism includes the conventional theory of generalized functions (without their product) as a special case. However, the resulting algebra is non-commutative: generalized functions signum and delta anticommute. Few applications of the algebra were suggested.
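A toy model makes the bookkeeping concrete. The sketch below is entirely our own, and it is commutative, so it does not capture the signum–delta anticommutation of the actual algebra; it only illustrates how dropping products of singular parts gives δ(x)² = 0 while keeping the classical rule f(x)δ(x) = f(0)δ(x).

```python
class GFun:
    """Toy generalized function f(x) + a*delta(x)."""

    def __init__(self, smooth, a=0.0):
        self.smooth, self.a = smooth, a

    def __mul__(self, other):
        # (f + a*delta)(g + b*delta) = f*g + (f(0)*b + g(0)*a)*delta,
        # using f(x)*delta(x) = f(0)*delta(x) and delta*delta = 0.
        return GFun(lambda x: self.smooth(x) * other.smooth(x),
                    self.smooth(0) * other.a + other.smooth(0) * self.a)

delta = GFun(lambda x: 0.0, 1.0)
f = GFun(lambda x: x + 2.0)

print((delta * delta).a)   # 0.0 -- the delta(x)^2 = 0 rule
print((f * delta).a)       # 2.0 -- f(x)*delta(x) = f(0)*delta(x)
```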
=== Multiplication of distributions ===
The problem of multiplication of distributions, a limitation of the Schwartz distribution theory, becomes serious for non-linear problems.
Various approaches are used today. The simplest one is based on the definition of generalized function given by Yu. V. Egorov. Another approach to construct associative differential algebras is based on J.-F. Colombeau's construction: see Colombeau algebra. These are factor spaces
{\displaystyle G=M/N}
of "moderate" modulo "negligible" nets of functions, where "moderateness" and "negligibility" refers to growth with respect to the index of the family.
=== Example: Colombeau algebra ===
A simple example is obtained by using the polynomial scale on N,
{\displaystyle s=\{a_{m}:\mathbb {N} \to \mathbb {R} ,n\mapsto n^{m};~m\in \mathbb {Z} \}}
. Then for any semi normed algebra (E,P), the factor space will be
{\displaystyle G_{s}(E,P)={\frac {\{f\in E^{\mathbb {N} }\mid \forall p\in P,\exists m\in \mathbb {Z} :p(f_{n})=o(n^{m})\}}{\{f\in E^{\mathbb {N} }\mid \forall p\in P,\forall m\in \mathbb {Z} :p(f_{n})=o(n^{m})\}}}.}
In particular, for (E, P)=(C,|.|) one gets (Colombeau's) generalized complex numbers (which can be "infinitely large" and "infinitesimally small" and still allow for rigorous arithmetics, very similar to nonstandard numbers). For (E, P) = (C∞(R),{pk}) (where pk is the supremum of all derivatives of order less than or equal to k on the ball of radius k) one gets Colombeau's simplified algebra.
=== Injection of Schwartz distributions ===
This algebra "contains" all distributions T of D' via the injection
j(T) = (φn ∗ T)n + N,
where ∗ is the convolution operation, and
φn(x) = n φ(nx).
This injection is non-canonical in the sense that it depends on the choice of the mollifier φ, which should be C∞, of integral one and have all its derivatives at 0 vanishing. To obtain a canonical injection, the indexing set can be modified to be N × D(R), with a convenient filter base on D(R) (functions of vanishing moments up to order q).
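The scaling φn(x) = n φ(nx) keeps the integral equal to one while concentrating mass near the origin, which is what makes (φn ∗ T)n a sensible representative of T. A quick numerical check of the unit-integral property, with our own choice of bump function and grid:

```python
import math

def bump(t):
    """C-infinity bump supported on (-1, 1)."""
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

N = 400000
a, b = -2.0, 2.0
dx = (b - a) / N
xs = [a + i * dx for i in range(N + 1)]

c = 1.0 / (sum(bump(t) for t in xs) * dx)   # normalize: integral of phi is 1

def phi_n_integral(n):
    # phi_n(x) = n * c * bump(n * x); its integral stays 1 for every n
    return sum(n * c * bump(n * t) for t in xs) * dx

for n in (1, 4, 16):
    print(n, round(phi_n_integral(n), 3))   # ~ 1.0 each time
```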
=== Sheaf structure ===
If (E,P) is a (pre-)sheaf of semi normed algebras on some topological space X, then Gs(E, P) will also have this property. This means that the notion of restriction will be defined, which allows one to define the support of a generalized function w.r.t. a subsheaf, in particular:
For the subsheaf {0}, one gets the usual support (complement of the largest open subset where the function is zero).
For the subsheaf E (embedded using the canonical (constant) injection), one gets what is called the singular support, i.e., roughly speaking, the closure of the set where the generalized function is not a smooth function (for E = C∞).
=== Microlocal analysis ===
The Fourier transformation being (well-)defined for compactly supported generalized functions (component-wise), one can apply the same construction as for distributions, and define Lars Hörmander's wave front set also for generalized functions.
This has an especially important application in the analysis of propagation of singularities.
== Other theories ==
These include: the convolution quotient theory of Jan Mikusinski, based on the field of fractions of convolution algebras that are integral domains; and the theories of hyperfunctions, based (in their initial conception) on boundary values of analytic functions, and now making use of sheaf theory.
== Topological groups ==
Bruhat introduced a class of test functions, the Schwartz–Bruhat functions, on a class of locally compact groups that goes beyond the manifolds that are the typical function domains. The applications are mostly in number theory, particularly to adelic algebraic groups. André Weil rewrote Tate's thesis in this language, characterizing the zeta distribution on the idele group; and has also applied it to the explicit formula of an L-function.
== Generalized section ==
A further way in which the theory has been extended is as generalized sections of a smooth vector bundle. This is on the Schwartz pattern, constructing objects dual to the test objects, smooth sections of a bundle that have compact support. The most developed theory is that of De Rham currents, dual to differential forms. These are homological in nature, in the way that differential forms give rise to De Rham cohomology. They can be used to formulate a very general Stokes' theorem.
== See also ==
Beppo-Levi space
Dirac delta function
Generalized eigenfunction
Distribution (mathematics)
Hyperfunction
Laplacian of the indicator
Rigged Hilbert space
Limit of a distribution
Generalized space
Ultradistribution
== Books ==
Schwartz, L. (1950). Théorie des distributions. Vol. 1. Paris: Hermann. OCLC 889264730. Vol. 2. OCLC 889391733
Beurling, A. (1961). On quasianalyticity and general distributions (multigraphed lectures). Summer Institute, Stanford University. OCLC 679033904.
Gelʹfand, Izrailʹ Moiseevič; Vilenkin, Naum Jakovlevič (1964). Generalized Functions. Vol. I–VI. Academic Press. OCLC 728079644.
Hörmander, L. (2015) [1990]. The Analysis of Linear Partial Differential Operators (2nd ed.). Springer. ISBN 978-3-642-61497-2.
H. Komatsu, Introduction to the theory of distributions, Second edition, Iwanami Shoten, Tokyo, 1983.
Colombeau, J.-F. (2000) [1983]. New Generalized Functions and Multiplication of Distributions. Elsevier. ISBN 978-0-08-087195-0.
Vladimirov, V.S.; Drozhzhinov, Yu. N.; Zav’yalov, B.I. (2012) [1988]. Tauberian theorems for generalized functions. Springer. ISBN 978-94-009-2831-2.
Oberguggenberger, M. (1992). Multiplication of distributions and applications to partial differential equations. Longman. ISBN 978-0-582-08733-0. OCLC 682138968.
Morimoto, M. (1993). An introduction to Sato's hyperfunctions. American Mathematical Society. ISBN 978-0-8218-8767-7.
Demidov, A.S. (2001). Generalized Functions in Mathematical Physics: Main Ideas and Concepts. Nova Science. ISBN 9781560729051.
Grosser, M.; Kunzinger, M.; Oberguggenberger, Michael; Steinbauer, R. (2013) [2001]. Geometric theory of generalized functions with applications to general relativity. Springer. ISBN 978-94-015-9845-3.
Estrada, R.; Kanwal, R. (2012). A distributional approach to asymptotics. Theory and applications (2nd ed.). Birkhäuser Boston. ISBN 978-0-8176-8130-2.
Vladimirov, V.S. (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 978-0-415-27356-5.
Kleinert, H. (2009). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets (5th ed.). World Scientific. ISBN 9789814273572. (online here). See Chapter 11 for products of generalized functions.
Pilipović, S.; Stanković, B.; Vindas, J. (2012). Asymptotic behavior of generalized functions. World Scientific. ISBN 9789814366847.
== References == | Wikipedia/Generalised_function |