Comma-separated values (CSV) is a text file format that uses commas to separate values, and newlines to separate records. A CSV file stores tabular data (numbers and text) in plain text, where each line of the file typically represents one data record. Each record consists of the same number of fields, and these are separated by commas in the CSV file. If the field delimiter itself may appear within a field, fields can be surrounded with quotation marks.
The CSV file format is one type of delimiter-separated file format. Delimiters frequently used include the comma, tab, space, and semicolon. Delimiter-separated files are often given a ".csv" extension even when the field separator is not a comma. Many applications or libraries that consume or produce CSV files have options to specify an alternative delimiter.
Inconsistent adherence to the CSV standard RFC 4180 means that software consuming CSV data must support a variety of CSV formats. Despite this drawback, CSV remains widespread in data applications and is widely supported by a variety of software, including common spreadsheet applications such as Microsoft Excel. Benefits cited in favor of CSV include human readability and the simplicity of the format.
== Applications ==
CSV is a common data exchange format that is widely supported by consumer, business, and scientific applications. Among its most common uses is moving tabular data between programs that natively operate on incompatible (often proprietary or undocumented) formats. For example, a user may need to transfer information from a database program that stores data in a proprietary format, to a spreadsheet that uses a completely different format. Most database programs can export data as CSV. Most spreadsheet programs can read CSV data, allowing CSV to be used as an intermediate format when transferring data from a database to a spreadsheet. Every major ecommerce platform provides support for exporting data as a CSV file.
CSV is also used for storing data. Common data science tools such as Pandas include the option to export data to CSV for long-term storage. Benefits of CSV for data storage include its simplicity, which makes parsing and creating CSV files easy to implement and fast compared to other data formats; human readability, which makes editing or fixing data simpler; and high compressibility, which leads to smaller data files. On the other hand, CSV does not support more complex data relations and makes no distinction between null and empty values, so other formats are preferred in applications where these features are needed.
More than 200 local, regional, and national data portals, such as those of the UK government and the European Commission, use CSV files with standardized data catalogs.
== Specification ==
RFC 4180 proposes a specification for the CSV format; however, actual practice often does not follow the RFC and the term "CSV" might refer to any file that:
is plain text using a character encoding such as ASCII, various Unicode character encodings (e.g. UTF-8), EBCDIC, or Shift JIS,
consists of records (typically one record per line),
with the records divided into fields separated by a comma,
where every record has the same sequence of fields.
Within these general constraints, many variations are in use. Therefore, without additional information (such as whether RFC 4180 is honored), a file claimed simply to be in "CSV" format is not fully specified.
== History ==
Comma-separated values is a data format that predates personal computers by more than a decade: the IBM Fortran (level H extended) compiler under OS/360 supported CSV in 1972. List-directed ("free form") input/output was defined in FORTRAN 77, approved in 1978. List-directed input used commas or spaces for delimiters, so unquoted character strings could not contain commas or spaces.
The term "comma-separated value" and the "CSV" abbreviation were in use by 1983. The manual for the Osborne Executive computer, which bundled the SuperCalc spreadsheet, documents the CSV quoting convention that allows strings to contain embedded commas, but the manual does not specify a convention for embedding quotation marks within quoted strings.
Comma-separated value lists are easier to type (for example into punched cards) than fixed-column-aligned data, and they were less prone to producing incorrect results if a value was punched one column off from its intended location.
Comma-separated files are used for the interchange of database information between machines of different architectures. The plain-text character of CSV files largely avoids incompatibilities such as byte order and word size. The files are largely human-readable, so it is easier to deal with them in the absence of perfect documentation or communication.
The main standardization initiative—transforming "de facto fuzzy definition" into a more precise and de jure one—was in 2005, with RFC 4180, defining CSV as a MIME Content Type. Later, in 2013, some of RFC 4180's deficiencies were tackled by a W3C recommendation.
In 2014 IETF published RFC 7111 describing the application of URI fragments to CSV documents. RFC 7111 specifies how row, column, and cell ranges can be selected from a CSV document using position indexes.
In 2015 the W3C, in an attempt to enhance CSV with formal semantics, published the first drafts of recommendations for CSV metadata standards, which became W3C Recommendations in December of the same year.
== General functionality ==
CSV formats are best used to represent sets or sequences of records in which each record has an identical list of fields. This corresponds to a single relation in a relational database, or to data (though not calculations) in a typical spreadsheet.
The format dates back to the early days of business computing and is widely used to pass data between computers with different internal word sizes, data formatting needs, and so forth. For this reason, CSV files are common on all computer platforms.
CSV is a delimited text file that uses a comma to separate values (many implementations of CSV import/export tools allow other separators to be used; for example, the use of a "Sep=^" row as the first row in the *.csv file will cause Excel to open the file expecting caret "^" to be the separator instead of comma ","). Simple CSV implementations may prohibit field values that contain a comma or other special characters such as newlines. More sophisticated CSV implementations permit them, often by requiring " (double quote) characters around values that contain reserved characters (such as commas, double quotes, or less commonly, newlines). Embedded double quote characters may then be represented by a pair of consecutive double quotes, or by prefixing a double quote with an escape character such as a backslash (for example in Sybase Central).
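The doubled-quote convention described above can be illustrated with Python's standard csv module, which applies minimal quoting by default; this is a small sketch, not a definitive specification of the format:

```python
import csv
import io

# Write a record whose fields contain a comma and an embedded double quote.
buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")  # default dialect: comma-delimited
writer.writerow(["1999", "Chevy", 'Venture "Extended Edition"', "ac, abs"])

# Only fields containing reserved characters are quoted, and embedded
# double quotes are represented by a pair of consecutive double quotes:
print(buf.getvalue())  # 1999,Chevy,"Venture ""Extended Edition""","ac, abs"
```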
CSV formats are not limited to a particular character set. They work just as well with Unicode character sets (such as UTF-8 or UTF-16) as with ASCII (although particular programs that support CSV may have their own limitations). CSV files normally will even survive naïve translation from one character set to another (unlike nearly all proprietary data formats). CSV does not, however, provide any way to indicate what character set is in use, so that must be communicated separately, or determined at the receiving end (if possible).
Databases that include multiple relations cannot be exported as a single CSV file. Similarly, CSV cannot naturally represent hierarchical or object-oriented data. This is because every CSV record is expected to have the same structure. CSV is therefore rarely appropriate for documents created with HTML, XML, or other markup or word-processing technologies.
Statistical databases in various fields often have a generally relation-like structure, but with some repeatable groups of fields. For example, health databases such as the Demographic and Health Survey typically repeat some questions for each child of a given parent (perhaps up to a fixed maximum number of children). Statistical analysis systems often include utilities that can "rotate" such data; for example, a "parent" record that includes information about five children can be split into five separate records, each containing (a) the information on one child, and (b) a copy of all the non-child-specific information. CSV can represent either the "vertical" or "horizontal" form of such data.
In a relational database, similar issues are readily handled by creating a separate relation for each such group, and connecting "child" records to the related "parent" records using a foreign key (such as an ID number or name for the parent). In markup languages such as XML, such groups are typically enclosed within a parent element and repeated as necessary (for example, multiple <child> nodes within a single <parent> node). With CSV there is no widely accepted single-file solution.
== Standardization ==
The name "CSV" indicates the use of the comma to separate data fields. Nevertheless, the term "CSV" is widely used to refer to a large family of formats that differ in many ways. Some implementations allow or require single or double quotation marks around some or all fields; and some reserve the first record as a header containing a list of field names. The character set being used is undefined: some applications require a Unicode byte order mark (BOM) to enforce Unicode interpretation (sometimes even a UTF-8 BOM). Files that use the tab character instead of comma can be more precisely referred to as "TSV" for tab-separated values.
Other implementation differences include the handling of more commonplace field separators (such as space or semicolon) and newline characters inside text fields. One more subtlety is the interpretation of a blank line: it can equally be the result of writing a record of zero fields, or a record of one field of zero length; thus decoding it is ambiguous.
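Python's csv module illustrates the blank-line subtlety: a blank line decodes as a record of zero fields, while an explicitly quoted zero-length field decodes as a record of one empty field. A minimal sketch:

```python
import csv
import io

# A blank line parses as a record of zero fields...
assert next(csv.reader(io.StringIO("\n"))) == []
# ...while a quoted empty field parses as a record of one zero-length field.
assert next(csv.reader(io.StringIO('""\n'))) == [""]
```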
=== RFC 4180 and MIME standards ===
The 2005 technical standard RFC 4180 formalizes the CSV file format and defines the MIME type "text/csv" for the handling of text-based fields. However, the interpretation of the text of each field is still application-specific. Files that follow the RFC 4180 standard can simplify CSV exchange and should be widely portable. Among its requirements:
MS-DOS-style lines that end with (CR/LF) characters (optional for the last line).
An optional header record (there is no sure way to detect whether it is present, so care is required when importing).
Each record should contain the same number of comma-separated fields.
Any field may be quoted (with double quotes).
Fields containing a line-break, double-quote or commas should be quoted. (If they are not, the file will likely be impossible to process correctly.)
If double-quotes are used to enclose fields, then a double-quote in a field must be represented by two double-quote characters.
The format can be processed by most programs that claim to read CSV files. The exceptions are that (a) programs may not support line-breaks within quoted fields, (b) programs may confuse the optional header with data or interpret the first data line as an optional header, and (c) double-quotes embedded in fields may not be parsed correctly.
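The RFC 4180 rules above (CRLF line endings, quoted fields containing commas and line breaks) can be exercised with Python's csv module; opening the stream with newline="" disables newline translation, as the module's documentation recommends:

```python
import csv
import io

# An RFC 4180-style record with a quoted field containing a comma and an
# embedded CRLF line break.
data = '1996,Jeep,Grand Cherokee,"MUST SELL!\r\nair, moon roof, loaded",4799.00\r\n'

rows = list(csv.reader(io.StringIO(data, newline="")))
assert rows == [["1996", "Jeep", "Grand Cherokee",
                 "MUST SELL!\r\nair, moon roof, loaded", "4799.00"]]
```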
=== OKF frictionless tabular data package ===
In 2011 the Open Knowledge Foundation (OKF) and various partners created a data protocols working group, which later evolved into the Frictionless Data initiative. One of the main formats they released was the Tabular Data Package. The Tabular Data Package was heavily based on CSV, using it as the main data transport format and adding basic type and schema metadata (CSV itself lacks any type information to distinguish the string "1" from the number 1).
The Frictionless Data Initiative has also provided a standard CSV Dialect Description Format for describing different dialects of CSV, for example specifying the field separator or quoting rules.
=== W3C tabular data standard ===
In 2013 the W3C "CSV on the Web" working group began to specify technologies providing higher interoperability for web applications using CSV or similar formats. The working group completed its work in February 2016 and was officially closed in March 2016 with the release of a set of documents and W3C recommendations for modeling "tabular data" and enhancing CSV with metadata and semantics.
While the well-formedness of CSV data can be readily checked, testing validity and canonical form is less well developed relative to more precise data formats, such as XML and SQL, which offer richer types and rules-based validation.
== Basic rules ==
Many informal documents exist that describe "CSV" formats.
IETF RFC 4180 (summarized above) defines the format for the "text/csv" MIME type registered with the IANA.
Rules typical of these and other "CSV" specifications and implementations are as follows:
== Example ==
A table of data may be represented in CSV format as follows:
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""","",5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
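When the first record is treated as a header, Python's csv.DictReader maps each subsequent record to the header's field names. Note that every value, including Price, comes back as a string, since CSV carries no type information:

```python
import csv
import io

data = (
    "Year,Make,Model,Description,Price\n"
    '1997,Ford,E350,"ac, abs, moon",3000.00\n'
    '1999,Chevy,"Venture ""Extended Edition""","",4900.00\n'
)

rows = list(csv.DictReader(io.StringIO(data)))
assert rows[1]["Model"] == 'Venture "Extended Edition"'
assert rows[0]["Price"] == "3000.00"  # still a string, not a number
```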
Example of a USA/UK CSV file (where the decimal separator is a period/full stop and the value separator is a comma):
Year,Make,Model,Length
1997,Ford,E350,2.35
2000,Mercury,Cougar,2.38
Example of an analogous European CSV/DSV file (where the decimal separator is a comma and the value separator is a semicolon):
Year;Make;Model;Length
1997;Ford;E350;2,35
2000;Mercury;Cougar;2,38
The latter format is not RFC 4180 compliant. Compliance could be achieved by the use of a comma instead of a semicolon as a separator and by quoting all numbers that have a decimal mark.
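Reading the semicolon-separated variant is a matter of overriding the delimiter; the decimal comma still has to be converted explicitly. A minimal sketch using Python's csv module:

```python
import csv
import io

data = "Year;Make;Model;Length\n1997;Ford;E350;2,35\n2000;Mercury;Cougar;2,38\n"

reader = csv.reader(io.StringIO(data), delimiter=";")
next(reader)  # skip the header record
# Convert the decimal comma to a decimal point before parsing as a number.
lengths = [float(row[3].replace(",", ".")) for row in reader]
assert lengths == [2.35, 2.38]
```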
== Application support ==
Some applications use CSV as a data interchange format to enhance its interoperability, exporting and importing CSV. Others use CSV as an internal format.
As a data interchange format: the CSV file format is supported by almost all spreadsheets and database management systems,
Spreadsheets including Apple Numbers, LibreOffice Calc, and Apache OpenOffice Calc. Microsoft Excel also supports a dialect of CSV, with restrictions in comparison to other spreadsheet software (e.g., as of 2019, Excel still could not export CSV files in the commonly used UTF-8 character encoding, and the separator is not required to be a comma). The LibreOffice Calc CSV importer is a more generic delimited-text importer, supporting multiple separators at the same time as well as field trimming.
Many relational databases support saving query results to a CSV file. PostgreSQL provides the COPY command, which can both save data to and load data from a file. For example, COPY (SELECT * FROM articles) TO '/home/wikipedia/file.csv' (FORMAT csv) saves the content of the table articles to the file /home/wikipedia/file.csv.
Many utility programs on Unix-style systems (such as cut, paste, join, sort, uniq, awk) can split files on a comma delimiter, and can therefore process simple CSV files. However, this method does not correctly handle commas or new lines within quoted strings, hence it is better to use tools like csvkit or Miller.
As a (main or optional) internal representation: the format can be native or foreign, but this differs from an interchange format ("export/import only") in that no copy in another format needs to be created:
Some spreadsheets, including LibreOffice Calc, offer this option without forcing the user to adopt another format.
Some relational databases, when using standard SQL, offer foreign-data wrappers (FDWs). For example, PostgreSQL offers the CREATE EXTENSION file_fdw and CREATE FOREIGN TABLE commands to configure any variant of CSV.
Databases like Apache Hive offer the option to express CSV or .csv.gz as an internal table format.
The emacs editor can operate on CSV files using csv-nav mode.
CSV format is supported by libraries available for many programming languages. Most provide some way to specify the field delimiter, decimal separator, character encoding, quoting conventions, date format, etc.
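In Python's standard csv library, for instance, such options are grouped into named "dialects"; the dialect name "semicolon" below is an arbitrary, illustrative choice:

```python
import csv
import io

# Register an illustrative dialect for a semicolon-separated variant.
csv.register_dialect("semicolon", delimiter=";", quotechar='"',
                     quoting=csv.QUOTE_MINIMAL, lineterminator="\n")

buf = io.StringIO()
csv.writer(buf, dialect="semicolon").writerow(["E350", "2,35", "ac;abs"])
# With QUOTE_MINIMAL, only the field containing the delimiter is quoted:
assert buf.getvalue() == 'E350;2,35;"ac;abs"\n'
```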
=== Software and row limits ===
Programs that work with CSV may have limits on the maximum number of rows CSV files can have.
Below is a list of common software and its limitations:
Microsoft Excel: 1,048,576 row limit;
Microsoft PowerShell: no row or cell limit (limited by available memory);
Apple Numbers: 1,000,000 row limit;
Google Sheets: 10,000,000 cell limit (the product of columns and rows);
OpenOffice and LibreOffice: 1,048,576 row limit;
Sourcetable (a spreadsheet-database hybrid): no row limit;
Text editors (such as WordPad, TextEdit, Vim, etc.): no row or cell limit;
Databases (COPY command and FDW): no row or cell limit.
== See also ==
Tab-separated values
Comparison of data-serialization formats
Delimiter-separated values
Delimiter collision
Flat-file database
Simple Data Format
Substitute character, Null character, invisible comma U+2063
== References ==
== Further reading ==
"IBM DB2 Administration Guide - LOAD, IMPORT, and EXPORT File Formats". IBM. Archived from the original on 2016-12-13. Retrieved 2016-12-12. (Has file descriptions of delimited ASCII (.DEL) (including comma- and semicolon-separated) and non-delimited ASCII (.ASC) files for data transfer.)
A bibliographic record is an entry in a bibliographic index (or a library catalog) which represents and describes a specific resource. A bibliographic record contains the data elements necessary to help users identify and retrieve that resource, as well as additional supporting information, presented in a formalized bibliographic format. Additional information may support particular database functions such as search, or browse (e.g., by keywords), or may provide fuller presentation of the content item (e.g., the article's abstract).
Bibliographic records are usually retrievable from bibliographic indexes (e.g., contemporary bibliographic databases) by author, title, index term, or keyword. Bibliographic records can also be referred to as surrogate records or metadata. Bibliographic records can represent a wide variety of published contents, including traditional paper, digitized, or born-digital publications. The process of creation, exchange, and preservation of bibliographic records are parts of a larger process, called bibliographic control.
== History ==
The earliest known bibliographic records come from the catalogues (written in cuneiform script on clay tablets) of religious texts from 2000 B.C., that were identified by what appear to be key words in Sumerian. In ancient Greece, Callimachus of Cyrene recorded bibliographic records on 120 scrolls using a system called pinakes.
Early American library catalogs in the colonial period were typically made available in book form, either manuscript or printed. In early America, the title and author of a work were enough to distinguish it among others and order its record within a collection. However, as more and different kinds of resources arose, it became necessary to collect more information to distinguish them from one another. This conceptual framework of the bibliographic record as a collection of data elements served American librarianship well in its first one hundred years. Challenges to the current method have arisen in the form of new and different distribution methods, especially of the digital variety, and raise questions about whether the traditional conceptual model is still relevant and applicable.
== Formats ==
Today's bibliographic record formats originate from the times of the traditional paper-based isolated libraries, their self-contained collections and their corresponding library cataloguing systems. The modern formats, while reflecting this heritage in their structure, are machine-readable and most commonly conform to the MARC standards.
The subject bibliography databases (such as Chemical Abstracts, Medline, PsycInfo, or Web of Science) do not use the same kinds of bibliographical standards as does the library community. In this context, the Common Communication Format is the best known standard.
The Library of Congress is currently developing BIBFRAME, a new RDF schema for expressing bibliographic data.
BIBFRAME is still in draft form, but several libraries are already testing cataloging under the new format.
BIBFRAME is particularly noteworthy because it describes resources using a number of different entities and relationships, unlike standard library records, which aggregate many types of information into a single independently understandable record.
The digital catalog of the National Library of France has the peculiarity of recording notes about access and restrictions, as well as the physical location of every individual paper copy of each title held in one of the libraries associated with its preservation system. This set of metadata helps ensure long-term digital preservation and content availability.
== References ==
A bibliographic index is a bibliography intended to help find a publication. Citations are usually listed by author and subject in separate sections, or in a single alphabetical sequence under a system of authorized headings collectively known as controlled vocabulary, developed over time by the indexing service. Indexes of this kind are issued in print periodical form (issued in monthly or quarterly paperback supplements, cumulated annually), online, or both. Since the 1970s, they are typically generated as output from bibliographic databases (whereas earlier they were manually compiled using index cards).
"From many points of view an index is synonymous with a catalogue, the principles of analysis used being identical, but whereas an index entry merely locates a subject, a catalogue entry includes descriptive specification of a document concerned with the subject".
The index may help search the literature of, for example, an academic field or discipline (example: Philosopher's Index), to works of a specific literary form (Biography Index) or published in a specific format (Newspaper Abstracts), or to the analyzed contents of a serial publication (New York Times Index).
== See also ==
Citation index
Guide to information sources
Indexing and abstracting service
Library catalog
List of academic databases and search engines
Metabibliography
Metadata registry
Subject index
== References ==
In computing, a database is an organized collection of data or a type of data store based on the use of a database management system (DBMS), the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
Before digital storage and retrieval of data had become widespread, index cards were used for data storage in a wide range of applications and environments: in the home to record and store recipes, shopping lists, contact information and other organizational data; in business to record presentation notes, project research and notes, and contact information; in schools as flash cards or other visual aids; and in academic research to hold data such as bibliographical citations or notes in a card file. Professional book indexers used index cards in the creation of book indexes until they were replaced by indexing software in the 1980s and 1990s.
Small databases can be stored on a file system, while large databases are hosted on computer clusters or cloud storage. The design of databases spans formal techniques and practical considerations, including data modeling, efficient data representation and storage, query languages, security and privacy of sensitive data, and distributed computing issues, including supporting concurrent access and fault tolerance.
Computer scientists may classify database management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, collectively referred to as NoSQL, because they use different query languages.
== Terminology and overview ==
Formally, a "database" refers to a set of related data accessed through the use of a "database management system" (DBMS), which is an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index), even though size and usage requirements typically necessitate use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups:
Data definition – Creation, modification and removal of definitions that detail how the data is to be organized.
Update – Insertion, modification, and deletion of the data itself.
Retrieval – Selecting data according to specified criteria (e.g., a query, a position in a hierarchy, or a position in relation to other data) and providing that data either directly to the user, or making it available for further processing by the database itself or by other applications. The retrieved data may be made available in a more or less direct form without modification, as it is stored in the database, or in a new form obtained by altering it or combining it with existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
== History ==
The sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks. The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid-1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM Db2, Oracle, MySQL, and Microsoft SQL Server are the most searched DBMS. The dominant database language, standardized SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object–relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object–relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key–value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
=== 1960s, navigational DBMS ===
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However, CODASYL databases were complex and required significant training and effort to produce useful applications.
IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed: the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL databases are classified as network databases. IMS remains in use as of 2014.
=== 1970s, relational DBMS ===
Edgar F. Codd worked at IBM in San Jose, California, in an office primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
The paper described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organize the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.
Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.
The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
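This normalization can be sketched with SQLite (via Python's standard `sqlite3` module). The table and column names here are illustrative, not from any particular system: a `user` table holds the core record, while addresses and phone numbers live in separate tables that reference it by key.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE user (
    user_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL)""")
con.execute("""CREATE TABLE address (
    address_id INTEGER PRIMARY KEY,
    user_id    INTEGER NOT NULL REFERENCES user(user_id),
    street     TEXT)""")
con.execute("""CREATE TABLE phone (
    phone_id INTEGER PRIMARY KEY,
    user_id  INTEGER NOT NULL REFERENCES user(user_id),
    number   TEXT)""")

con.execute("INSERT INTO user VALUES (1, 'Alice')")
# Alice has two phone numbers but no stored address: rows exist in the
# optional tables only where data was actually provided.
con.execute("INSERT INTO phone VALUES (1, 1, '555-0100')")
con.execute("INSERT INTO phone VALUES (2, 1, '555-0199')")

# Cross-references use the logical key (user_id), never a disk address.
rows = con.execute("""SELECT u.name, p.number
                      FROM user u JOIN phone p ON p.user_id = u.user_id
                      ORDER BY p.phone_id""").fetchall()
print(rows)  # [('Alice', '555-0100'), ('Alice', '555-0199')]
```

Note that no row is wasted representing Alice's missing address, which in a single variable-length navigational record would have required explicit empty fields or structural workarounds.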
As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.
Codd's paper inspired teams at various universities to research the subject, including one at the University of California, Berkeley, led by Eugene Wong and Michael Stonebraker, who started INGRES using funding that had already been allocated for a geographical database project, with student programmers producing the code. Beginning in 1973, INGRES delivered its first test products, which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. The university in 1974 hosted a debate between Codd and Bachman which Bruce Lindsay of IBM later described as "throwing lightning bolts at each other!". MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.
=== Integrated approach ===
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued in certain applications by some companies like Netezza and Oracle (Exadata).
=== Late 1970s, SQL DBMS ===
IBM formed a team led by Codd that started working on a prototype system, System R, despite opposition from others at the company. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (IBM Db2).
Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it was not until Oracle Version 2 in 1979 that Ellison beat IBM to market.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed in the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
=== 1980s, on the desktop ===
Besides IBM and various software companies such as Sybase and Informix Corporation, most large computer hardware vendors by the 1980s had their own database systems such as DEC's VAX Rdb/VMS. The decade ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top selling software titles in the 1980s and early 1990s.
=== 1990s, object-oriented ===
By the start of the decade databases had become a billion-dollar industry in about ten years. The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be related to objects and their attributes and not to individual fields. The term "object–relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object–relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object–relational mappings (ORMs) attempt to solve the same problem.
=== 2000s, NoSQL and NewSQL ===
Database sales grew rapidly during the dotcom bubble and, after its end, the rise of ecommerce. The popularity of open source databases such as MySQL has grown since 2000, to the extent that Ken Jacobs of Oracle said in 2005 that perhaps "these guys are doing to us what we did to IBM".
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records.
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem, it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases use what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system.
== Use cases ==
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
== Classification ==
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed-up by non-volatile computer data storage. Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment.
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
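A minimal sketch of such a trigger, using SQLite through Python's `sqlite3` module (the table and trigger names are invented for illustration): the database itself reacts to an update by writing an audit row, with no application code involved.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
con.execute("CREATE TABLE audit (account_id INTEGER, old REAL, new REAL)")

# The trigger fires automatically on every balance UPDATE.
con.execute("""CREATE TRIGGER log_balance AFTER UPDATE OF balance ON account
               BEGIN
                 INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
               END""")

con.execute("INSERT INTO account VALUES (1, 100.0)")
con.execute("UPDATE account SET balance = 250.0 WHERE id = 1")

audit_rows = con.execute("SELECT * FROM audit").fetchall()
print(audit_rows)  # [(1, 100.0, 250.0)]
```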
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are developed by programmers and later maintained and used by end-users through a web browser and open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Basic components of data warehousing include extracting data from source systems, transforming it, loading it into the warehouse, and managing it so that it remains available for analysis and mining.
A deductive database combines logic programming with a relational database.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
An embedded database system is a DBMS which is tightly integrated with an application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym for federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
An array DBMS is a kind of NoSQL DBMS that allows modeling, storage, and retrieval of (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems together with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business's customers, personnel databases that hold information such as salary, benefits, skills data about employees, enterprise resource planning systems that record details about product components, parts inventory, and financial databases that keep track of the organization's money, accounting and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared-nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store the data with multidimensional features. The queries on such data include location-based queries, like "Where is the closest hotel in my area?".
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
== Database management system ==
Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database." Examples of DBMSs include MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle Database, and Microsoft Access.
The DBMS acronym is sometimes extended to indicate the underlying database model, with RDBMS for the relational, OODBMS for the object (oriented) and ORDBMS for the object–relational model. Other extensions can indicate some other characteristics, such as DDBMS for distributed database management systems.
The functionality provided by a DBMS can vary enormously. The core functionality is the storage, retrieval and update of data. Codd proposed the following functions and services a fully-fledged general purpose DBMS should provide:
Data storage, retrieval and update
User accessible catalog or data dictionary describing the metadata
Support for transactions and concurrency
Facilities for recovering the database should it become damaged
Support for authorization of access and update of data
Access support from remote locations
Enforcing constraints to ensure data in the database abides by certain rules
It is also generally expected that the DBMS will provide a set of utilities needed to administer the database effectively, including import, export, monitoring, defragmentation, and analysis utilities. The core part of the DBMS that mediates between the database and the application interface is sometimes referred to as the database engine.
Often DBMSs will have configuration parameters that can be statically and dynamically tuned, for example the maximum amount of main memory on a server the database can use. The trend is to minimize the amount of manual configuration, and for cases such as embedded databases the need to target zero-administration is paramount.
The large major enterprise DBMSs have tended to increase in size and functionality and have involved up to thousands of human years of development effort throughout their lifetime.
Early multi-user DBMS typically only allowed for the application to reside on the same computer with access via terminals or terminal emulation software. The client–server architecture was a development where the application resided on a client desktop and the database on a server allowing the processing to be distributed. This evolved into a multitier architecture incorporating application servers and web servers with the end user interface via a web browser with the database only directly connected to the adjacent tier.
A general-purpose DBMS will provide public application programming interfaces (API) and optionally a processor for database languages such as SQL to allow applications to be written to interact with and manipulate the database. A special purpose DBMS may use a private API and be specifically customized and linked to a single application. For example, an email system performs many of the functions of a general-purpose DBMS such as message insertion, message deletion, attachment handling, blocklist lookup, associating messages with an email address, and so forth; however, these functions are limited to what is required to handle email.
== Application ==
External interaction with the database will be via an application program that interfaces with the DBMS. This can range from a database tool that allows users to execute SQL queries textually or graphically, to a website that happens to use a database to store and search information.
=== Application program interface ===
A programmer will code interactions with the database (sometimes referred to as a datasource) via an application programming interface (API) or via a database language. The particular API or language chosen will need to be supported by the DBMS, possibly indirectly via a preprocessor or a bridging API. Some APIs aim to be database independent, ODBC being a commonly known example. Other common APIs include JDBC and ADO.NET.
== Database languages ==
Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages:
Data control language (DCL) – controls access to data;
Data definition language (DDL) – defines data types such as creating, altering, or dropping tables and the relationships among them;
Data manipulation language (DML) – performs tasks such as inserting, updating, or deleting data occurrences;
Data query language (DQL) – allows searching for information and computing derived information.
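The sublanguages above can be sketched with SQLite via Python's `sqlite3` module (the `employee` table is invented for illustration). SQLite implements DDL, DML, and DQL; it has no user accounts, so DCL appears only as a comment.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# DDL: define the data and its structure
con.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# DML: insert, update, or delete data occurrences
con.execute("INSERT INTO employee (name, dept) VALUES ('Ada', 'engineering')")
con.execute("UPDATE employee SET dept = 'research' WHERE name = 'Ada'")

# DQL: search for information and compute derived information
count = con.execute(
    "SELECT COUNT(*) FROM employee WHERE dept = 'research'").fetchone()[0]
print(count)  # 1

# DCL would control access, e.g. "GRANT SELECT ON employee TO analyst;"
# (not supported by SQLite, which has no notion of database users)
```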
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and Db2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
== Storage ==
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Databases as digital objects contain three layers of information which must be stored: the data, the structure, and the semantics. Proper storage of all three layers is needed for the future preservation and longevity of the database. Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. the "storage engine".

Though typically accessed by a DBMS through the underlying operating system (and often using the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in storage in structures that look completely different from the way the data appear at the conceptual and external levels, but in ways that attempt to optimize the reconstruction of those levels when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
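The row-oriented versus column-oriented distinction can be illustrated with a plain-Python sketch (the table and values are hypothetical): the same logical table can be serialized record-by-record or column-by-column, and each layout favors a different access pattern.

```python
# A hypothetical two-column table: (id, price)
rows = [(1, 9.99), (2, 4.50), (3, 7.25)]

# Row-oriented layout: each record's fields are stored contiguously,
# which favors fetching whole records at once.
row_store = [field for record in rows for field in record]

# Column-oriented layout: each column's values are stored contiguously,
# which favors scans and aggregates over a single column.
col_store = {
    "id":    [r[0] for r in rows],
    "price": [r[1] for r in rows],
}

print(row_store)  # [1, 9.99, 2, 4.5, 3, 7.25]
# Summing one column touches only that column's contiguous values.
print(round(sum(col_store["price"]), 2))  # 21.74
```

A real storage engine adds pages, compression, and indexes on top of this, but the trade-off between the two layouts is the same.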
=== Materialized views ===
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves the expense of computing them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
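A minimal sketch of the idea, using SQLite via Python's `sqlite3` module: SQLite has no `MATERIALIZED VIEW` statement, so here the query result is stored in an ordinary table (all names are invented for illustration) and explicitly refreshed when the base data changes, which is exactly the synchronization overhead described above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sale (region TEXT, amount REAL)")
con.executemany("INSERT INTO sale VALUES (?, ?)",
                [("north", 10.0), ("north", 15.0), ("south", 7.0)])

# Store the precomputed aggregate so readers need not recompute it.
con.execute("""CREATE TABLE sales_by_region AS
               SELECT region, SUM(amount) AS total FROM sale GROUP BY region""")

def refresh():
    # Re-synchronizing the stored result is the update overhead
    # of a materialized view.
    con.execute("DELETE FROM sales_by_region")
    con.execute("""INSERT INTO sales_by_region
                   SELECT region, SUM(amount) FROM sale GROUP BY region""")

con.execute("INSERT INTO sale VALUES ('south', 3.0)")
refresh()
totals = con.execute("SELECT * FROM sales_by_region ORDER BY region").fetchall()
print(totals)  # [('north', 25.0), ('south', 10.0)]
```

Production systems (e.g., PostgreSQL, Oracle) provide materialized views natively, with refresh handled by the DBMS rather than application code.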
=== Replication ===
Occasionally a database employs storage redundancy by replicating database objects (with one or more copies) to increase data availability (both to improve performance of simultaneous multiple end-user accesses to the same database object, and to provide resiliency in the case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.
=== Virtualization ===
With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.
== Security ==
Database security deals with all various aspects of protecting the database content, its owners, and its users. It ranges from protection from intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or using specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by special personnel, authorized by the database owner, who use dedicated protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; e.g., see physical security) and with respect to their interpretation, in whole or in part, as meaningful information (e.g., by looking at the strings of bits that they comprise and concluding specific valid credit-card numbers; e.g., see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches. Organizations should therefore take database security seriously: it safeguards them from security breaches and hacking activities such as firewall intrusion, virus spread, and ransomware, and helps protect the company's essential information, which must not be shared with outsiders.
== Transactions and concurrency ==
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring or releasing a lock, etc.), an abstraction supported in database systems and other systems. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: atomicity, consistency, isolation, and durability.
== Migration ==
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economic (different DBMSs may have different total costs of ownership, or TCOs), functional, and operational (different DBMSs may have different capabilities). Migration involves the database's transformation from one DBMS type to another. The transformation should leave the database-related applications (i.e., all related application programs) intact, if possible. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may also be desirable to maintain some aspects of the internal architecture level. A complex or large database migration may be a complicated and costly (one-time) project in itself, which should be factored into the decision to migrate, despite the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.
== Building, maintaining, and tuning ==
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialized and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added to, and new related application programs may be written to add to the application's functionality.
== Backup and restore ==
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., when the database is found to be corrupted due to a software error, or when it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When a database administrator decides to bring the database back to a previous state (e.g., by specifying this state by a desired point in time when the database was in that state), these files are used to restore it.
== Static analysis ==
Static analysis techniques for software verification can be applied also in the scenario of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular, for security purposes, such as fine-grained access control, watermarking, etc.
== Miscellaneous features ==
Other DBMS features might include:
Database logs – This helps in keeping a history of the executed functions.
Graphics component for producing graphs and charts, especially in a data warehouse system.
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as "DevOps for database".
== Design and modeling ==
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity–relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design).
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
=== Models ===
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object–relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
=== External, conceptual, and internal views ===
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level (or logical level) unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual and internal view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are in the interest of the human resources department. Thus different departments need different views of the company's database.
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On the one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures.
== Research ==
Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, related concurrency control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems (TODS) and Data and Knowledge Engineering (DKE)) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).
== See also ==
== Notes ==
== References ==
== Sources ==
== Further reading ==
Ling Liu and Tamer M. Özsu (Eds.) (2009). Encyclopedia of Database Systems, 4100 p., 60 illus. ISBN 978-0-387-49616-0.
Gray, J. and Reuter, A. Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
Kroenke, David M. and David J. Auer. Database Concepts. 3rd ed. New York: Prentice, 2007.
Raghu Ramakrishnan and Johannes Gehrke, Database Management Systems.
Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts.
Lightstone, S.; Teorey, T.; Nadeau, T. (2007). Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more. Morgan Kaufmann Press. ISBN 978-0-12-369389-1.
Teorey, T.; Lightstone, S. and Nadeau, T. Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005. ISBN 0-12-685352-5.
CMU Database courses playlist
MIT OCW 6.830 | Fall 2010 | Database Systems
Berkeley CS W186
== External links ==
DB File extension – information about files with the DB extension
In computer science, Algorithms for Recovery and Isolation Exploiting Semantics, or ARIES, is a recovery algorithm designed to work with a no-force, steal database approach; it is used by IBM Db2, Microsoft SQL Server and many other database systems. IBM Fellow Chandrasekaran Mohan is the primary inventor of the ARIES family of algorithms.
Three main principles lie behind ARIES:
Write-ahead logging: Any change to an object is first recorded in the log, and the log must be written to stable storage before changes to the object are written to disk.
Repeating history during Redo: On restart after a crash, ARIES retraces the actions of a database before the crash and brings the system back to the exact state that it was in before the crash. Then it undoes the transactions still active at crash time.
Logging changes during Undo: Changes made to the database while undoing transactions are logged to ensure such an action isn't repeated in the event of repeated restarts.
== Logging ==
The ARIES algorithm relies on logging of all database operations with ascending Sequence Numbers. Usually the resulting logfile is stored on so-called "stable storage", that is, a storage medium that is assumed to survive crashes and hardware failures.
To gather the necessary information for the logs, two data structures have to be maintained: the dirty page table (DPT) and the transaction table (TT).
The dirty page table keeps a record of all the pages that have been modified, and not yet written to disk, together with the first Sequence Number that caused each page to become dirty. The transaction table contains all currently running transactions and the Sequence Number of the last log entry they created.
We create log records of the form (Sequence Number, Transaction ID, Page ID, Redo, Undo, Previous Sequence Number). The Redo and Undo fields keep information about the changes this log record saves and how to undo them. The Previous Sequence Number is a reference to the previous log record that was created for this transaction. In the case of an aborted transaction, it's possible to traverse the log file in reverse order using the Previous Sequence Numbers, undoing all actions taken within the specific transaction.
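The record layout described above can be sketched as a C structure, together with the backward traversal of one transaction's chain via the Previous Sequence Number field (the fixed field types and the in-memory log are illustrative assumptions, not ARIES-mandated structures):

```c
#include <assert.h>
#include <stddef.h>

/* One ARIES-style log record, as described above.
 * The Redo/Undo payloads are simplified to opaque ints here. */
typedef struct {
    long lsn;       /* Sequence Number of this record               */
    int  tx_id;     /* Transaction ID                               */
    int  page_id;   /* page the change applies to                   */
    int  redo;      /* how to reapply the change (simplified)       */
    int  undo;      /* how to revert the change (simplified)        */
    long prev_lsn;  /* previous record of the same transaction,
                     * or -1 if this is the transaction's first one */
} LogRecord;

/* Walk one transaction's chain backwards from its last LSN and
 * return how many of its records were visited, i.e. how many
 * actions an abort would have to undo. */
static int count_undo_steps(const LogRecord *log, size_t n, long last_lsn) {
    int steps = 0;
    long lsn = last_lsn;
    while (lsn != -1) {
        const LogRecord *rec = NULL;
        for (size_t i = 0; i < n; i++)   /* find the record by LSN */
            if (log[i].lsn == lsn) { rec = &log[i]; break; }
        if (!rec)
            break;
        steps++;
        lsn = rec->prev_lsn;             /* follow the back-chain  */
    }
    return steps;
}
```

A transaction whose records chain 30 → 10 → -1, for instance, requires two undo steps, while an unrelated single-record transaction requires one.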
Every transaction implicitly begins with the first "Update" type of entry for the given Transaction ID, and is committed with an "End Of Log" (EOL) entry for the transaction.
During a recovery, or while undoing the actions of an aborted transaction, a special kind of log record is written, the Compensation Log Record (CLR), to record that the action has already been undone. CLRs are of the form (Sequence Number, Transaction ID, Page ID, Redo, Previous Sequence Number, Next Undo Sequence Number). The Redo field contains the application of the Undo field of the reverted action, and the Undo field is omitted because a CLR is never reverted.
== Recovery ==
The recovery works in three phases. The first phase, Analysis, computes all the necessary information from the logfile. The Redo phase restores the database to the exact state at the crash, including all the changes of uncommitted transactions that were running at that point in time. The Undo phase then undoes all uncommitted changes, leaving the database in a consistent state.
=== Analysis ===
During the Analysis phase we restore the DPT and the TT as they were at the time of the crash.
We run through the logfile (from the beginning or the last checkpoint) and add all transactions for which we encounter Begin Transaction entries to the TT. Whenever an End Log entry is found, the corresponding transaction is removed. The last Sequence Number for each transaction is also maintained.
During the same run we also fill the dirty page table by adding a new entry whenever we encounter a page that is modified and not yet in the DPT. This, however, only computes a superset of all pages that were dirty at the time of the crash, since we don't check the actual database file to see whether the page was written back to storage.
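A minimal sketch of this Analysis scan follows; the record types, fixed-size tables, and in-memory log are simplified assumptions for illustration, not ARIES data structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_TX    16
#define MAX_PAGES 64

typedef enum { REC_BEGIN, REC_UPDATE, REC_END } RecType;

typedef struct { long lsn; int tx_id; int page_id; RecType type; } Rec;

typedef struct {             /* transaction table (TT)               */
    bool active[MAX_TX];
    long last_lsn[MAX_TX];   /* last log record of each transaction  */
} TT;

typedef struct {             /* dirty page table (DPT)               */
    bool dirty[MAX_PAGES];
    long rec_lsn[MAX_PAGES]; /* first LSN that dirtied the page      */
} DPT;

/* Scan the log once, rebuilding TT and DPT as they (conservatively)
 * were at the time of the crash. */
static void analysis(const Rec *log, size_t n, TT *tt, DPT *dpt) {
    for (size_t i = 0; i < n; i++) {
        const Rec *r = &log[i];
        switch (r->type) {
        case REC_BEGIN:
            tt->active[r->tx_id] = true;
            break;
        case REC_END:                      /* transaction finished  */
            tt->active[r->tx_id] = false;
            break;
        case REC_UPDATE:
            tt->active[r->tx_id] = true;   /* implicit begin        */
            tt->last_lsn[r->tx_id] = r->lsn;
            if (!dpt->dirty[r->page_id]) { /* first write dirties   */
                dpt->dirty[r->page_id] = true;
                dpt->rec_lsn[r->page_id] = r->lsn;
            }
            break;
        }
    }
}
```

Note that, as the text says, the rebuilt DPT is only a conservative superset: a page stays listed even if it was in fact flushed before the crash.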
=== Redo ===
From the DPT, we can compute the minimal Sequence Number of a dirty page. From there, we have to start redoing the actions until the crash, in case they weren't persisted already.
Running through the log file, we check for each entry whether the modified page P of the entry exists in the DPT. If it doesn't, then we do not have to worry about redoing this entry, since the data persist on disk. If page P exists in the DPT, then we see whether the Sequence Number in the DPT is smaller than the Sequence Number of the log record (i.e., whether the change in the log is newer than the last version that was persisted). If it isn't, then we don't redo the entry, since the change is already there. If it is, we fetch the page from the database storage and compare the Sequence Number stored on the page to the Sequence Number of the log record. If the former is smaller than the latter, the change needs to be reapplied to the page. This check is necessary because the recovered DPT is only a conservative superset of the pages that really need changes reapplied. Lastly, when all the above checks have passed, we reapply the redo action and store the new Sequence Number on the page. Storing the Sequence Number on the page is also important for recovery from a crash during the Redo phase itself, as it ensures the redo isn't applied twice to the same page.
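The per-record decision of the Redo pass can be sketched as a single predicate; the flattened DPT lookup and the page layout are simplified assumptions:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { int page_id; long page_lsn; } Page; /* LSN stored on the page */

/* Decide whether a log record (with lsn log_lsn) for a page must be
 * reapplied, given the DPT entry for that page (if any) and the page
 * as read from storage.  Mirrors the three checks described above. */
static bool must_redo(bool in_dpt, long dpt_rec_lsn,
                      long log_lsn, const Page *page_on_disk) {
    if (!in_dpt)
        return false;            /* page was already persisted          */
    if (log_lsn < dpt_rec_lsn)
        return false;            /* change predates the first dirtying  */
    if (page_on_disk->page_lsn >= log_lsn)
        return false;            /* page already carries this change    */
    return true;                 /* reapply, then store log_lsn on page */
}
```

The third check is the one that compensates for the DPT being a conservative superset: even a page listed as dirty is skipped if its on-page Sequence Number shows the change already landed.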
=== Undo ===
After the Redo phase, the database reflects the exact state at the crash. However the changes of uncommitted transactions have to be undone to restore the database to a consistent state.
For that we run backwards through the log for each transaction in the TT (those runs can of course be combined into one) using the Previous Sequence Number fields in the records. For each record we undo the changes (using the information in the Undo field) and write a compensation log record to the log file. If we encounter a Begin Transaction record we write an End Log record for that transaction.
The compensation log records make it possible to recover during a crash that occurs during the recovery phase. That isn't as uncommon as one might think, as it is possible for the recovery phase to take quite long. CLRs are read during the Analysis phase and redone during the Redo phase.
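The backward walk of the Undo phase, emitting one CLR per undone update, might be sketched like this (the record layout and in-memory log are simplified assumptions; a real implementation would also apply each Undo action to the page and flush the CLRs to stable storage):

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    long lsn;
    long prev_lsn;  /* back-chain; -1 marks the transaction's first
                     * record (for a CLR: the Next Undo Sequence
                     * Number, i.e. where undo resumes after a crash) */
    int  tx_id;
    int  page_id;
    int  is_clr;    /* 1 if this is a compensation log record        */
} Rec;

/* Undo one loser transaction: walk back via prev_lsn, appending one
 * CLR per undone update to out[].  Returns the number of CLRs
 * written; *next_lsn supplies fresh sequence numbers. */
static size_t undo_transaction(const Rec *log, size_t n, long last_lsn,
                               Rec *out, long *next_lsn) {
    size_t written = 0;
    for (long lsn = last_lsn; lsn != -1; ) {
        const Rec *r = NULL;
        for (size_t i = 0; i < n; i++)       /* find record by LSN */
            if (log[i].lsn == lsn) { r = &log[i]; break; }
        if (!r)
            break;
        /* record that this action is now undone */
        Rec clr = { (*next_lsn)++, r->prev_lsn, r->tx_id, r->page_id, 1 };
        out[written++] = clr;
        lsn = r->prev_lsn;
    }
    return written;
}
```

Because each CLR carries the undone record's Previous Sequence Number, a crash during undo lets recovery resume exactly where the walk stopped instead of undoing the same action twice.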
== Checkpoints ==
To avoid re-scanning the whole logfile during the analysis phase it is advisable to save the DPT and the TT regularly to the logfile, forming a checkpoint. Instead of having to run through the whole file it is just necessary to run backwards until a checkpoint is found. From that point it is possible to restore the DPT and the TT as they were at the time of the crash by reading the logfile forward again. Then it is possible to proceed as usual with Redo and Undo.
The naive way of checkpointing involves locking the whole database to avoid changes to the DPT and the TT during the creation of the checkpoint. Fuzzy logging circumvents that by writing two log records: one Fuzzy Log Starts Here record and, after preparing the checkpoint data, the actual checkpoint record. Between the two records, other log records can be created. During recovery, it is necessary to find both records to obtain a valid checkpoint.
== References ==
== External links ==
Impact of ARIES Family of Locking and Recovery Algorithms - C. Mohan, archived from the original on 2012-08-19, retrieved 2013-09-18
In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive that prevents state from being modified or accessed by multiple threads of execution at once. Locks enforce mutual exclusion concurrency control policies, and with a variety of possible methods there exist multiple unique implementations for different applications.
== Types ==
Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access.
The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.
Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process rescheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.
Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.
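As a sketch, a spinlock can be built directly on C11's atomic test-and-set primitive from <stdatomic.h> (production locks would add backoff, fairness, and OS-assisted blocking):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Try once: atomically set the flag and report whether we got the
 * lock.  atomic_flag_test_and_set returns the PREVIOUS value, so
 * "false" means the lock was free and is now ours. */
static bool try_acquire(void) {
    return !atomic_flag_test_and_set(&lock);
}

/* Blocking acquire: spin until the flag was observed clear. */
static void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                        /* busy-wait ("spin")            */
}

static void release(void) {
    atomic_flag_clear(&lock);    /* mark the lock free again      */
}
```

The test and the set happen in one atomic instruction, so two threads can never both observe the lock as free and both acquire it.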
Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special instructions or instruction prefixes to disable interrupts temporarily—but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.
The reason an atomic operation is required is because of concurrency, where more than one task executes the same logic. For example, consider the following C code:
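A minimal reconstruction of the usual non-atomic test-then-set pattern (the variable names, such as my_pid, are illustrative):

```c
/* NOT SAFE: the test and the set are two separate steps, so two
 * tasks can both see lock == 0 and both "acquire" the lock. */
static int lock = 0;   /* 0 means the lock is free */

static int acquire_naive(int my_pid) {
    if (lock == 0) {   /* another task may pass this test too...   */
        lock = my_pid; /* ...before either task reaches this store */
        return 1;      /* caller believes it now holds the lock    */
    }
    return 0;
}
```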
The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. Dekker's algorithm or Peterson's algorithm are possible substitutes if atomic locking operations are not available.
Careless use of locks can result in deadlock or livelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.)
Some languages do support locks syntactically. An example in C# follows:
C# introduced System.Threading.Lock in C# 13 on .NET 9.
The code lock(this) can lead to problems if the instance can be accessed publicly.
Similar to Java, C# can also synchronize entire methods, by using the MethodImplOptions.Synchronized attribute.
== Granularity ==
Before discussing lock granularity, one needs to understand three concepts about locks:
lock overhead: the extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead is associated with their use;
lock contention: this occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row);
deadlock: the situation when each of at least two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever.
There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.
An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently, because of increased lock contention. The coarser the lock, the higher the likelihood that it will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking, where each process must hold multiple locks from a common set of locks, can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.
In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.
== Database locks ==
Database locks can be used as a means of ensuring transaction synchronicity, i.e., when making transaction processing concurrent (interleaving transactions), using two-phase locking ensures that the concurrent execution of the transactions turns out to be equivalent to some serial ordering of the transactions. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.
There are mechanisms employed to manage the actions of multiple concurrent users on a database—the purpose is to prevent lost updates and dirty reads. The two types of locking are pessimistic locking and optimistic locking:
Pessimistic locking: a user who reads a record with the intention of updating it places an exclusive lock on the record to prevent other users from manipulating it. This means no one else can manipulate that record until the user releases the lock. The downside is that users can be locked out for a very long time, thereby slowing the overall system response and causing frustration.
Where to use pessimistic locking: this is mainly used in environments where data-contention (the degree of users request to the database system at any one time) is heavy; where the cost of protecting data through locks is less than the cost of rolling back transactions, if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively large periods of time. It is not appropriate for use in Web application development.
Optimistic locking: this allows multiple concurrent users access to the database whilst the system keeps a copy of the initial read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial read held in memory to the database record to detect any changes made to the record. Any discrepancy between the initial read and the database record violates concurrency rules and hence causes the system to disregard the update request; an error message is generated and the user is asked to start the update process again. Optimistic locking improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates, since no users are locked out. However, some updates may fail; the downside is that high volumes of update requests from multiple concurrent users can cause constant update failures, which can be frustrating for users.
Where to use optimistic locking: this is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.
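A common implementation of optimistic locking keeps a version (or timestamp) column and makes the UPDATE conditional on the version still matching the initial read. A minimal sketch with Python's built-in sqlite3 (the table and column names are illustrative, not from any particular product):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100, 0)")

def update_balance(conn, account_id, new_balance, seen_version):
    """The write succeeds only if the row still carries the version read earlier."""
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_balance, account_id, seen_version),
    )
    return cur.rowcount == 1  # False signals a concurrency conflict: retry

# Two users read the same row (version 0) ...
balance, version = conn.execute(
    "SELECT balance, version FROM accounts WHERE id = 1").fetchone()

first = update_balance(conn, 1, 150, version)   # succeeds and bumps the version
second = update_balance(conn, 1, 120, version)  # fails: its version is now stale
```

The second update is rejected without any record ever being locked, which is exactly the trade-off the text describes: no one waits, but a conflicting writer must start over.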
== Lock compatibility table ==
Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. If a first lock blocks another lock, the two locks are called incompatible; otherwise the locks are compatible. Often, the blocking interactions between lock types are presented in the technical literature by a lock compatibility table. The following is an example with the common, major lock types:
✔ indicates compatibility
X indicates incompatibility, i.e., a case when a lock of the first type (in the left column) on an object blocks a lock of the second type (in the top row) from being acquired on the same object (by another transaction). An object typically has a queue of operations requested by transactions, waiting with their respective locks. The first blocked lock for an operation in the queue is acquired as soon as the existing blocking lock is removed from the object, and then its respective operation is executed. If a lock for an operation in the queue is not blocked by any existing lock (multiple compatible locks may be held on the same object concurrently), it is acquired immediately.
Comment: In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no".
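The blocking rules for the common shared (read) and exclusive (write) lock types can be encoded directly as such a table; the sketch below grants a request only if it is compatible with every lock already held, and deliberately ignores queuing and fairness:

```python
# Compatibility table: (held, requested) -> compatible?
COMPATIBLE = {
    ("shared", "shared"): True,       # many readers may coexist
    ("shared", "exclusive"): False,   # a writer must wait for readers
    ("exclusive", "shared"): False,   # readers must wait for a writer
    ("exclusive", "exclusive"): False,
}

def can_grant(held_locks, requested):
    """Grant a lock only if it is compatible with all locks currently held."""
    return all(COMPATIBLE[(held, requested)] for held in held_locks)

print(can_grant(["shared", "shared"], "shared"))  # readers join freely
print(can_grant(["shared"], "exclusive"))         # writer is blocked
print(can_grant([], "exclusive"))                 # no holders: granted
```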
== Disadvantages ==
Lock-based resource protection and thread/process synchronization have many disadvantages:
Contention: some threads/processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls, blocks, or enters an infinite loop, other threads waiting for the lock may wait indefinitely until the computer is power cycled.
Overhead: the use of locks adds overhead for each access to a resource, even when the chances for collision are very rare. (However, any chance for such collisions is a race condition.)
Debugging: bugs associated with locks are time dependent and can be very subtle and extremely hard to replicate, such as deadlocks.
Instability: the optimal balance between lock overhead and lock contention can be unique to the problem domain (application) and sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of an application and may entail tremendous changes to update (re-balance).
Composability: locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate (overhead) software support and perfect adherence by applications programming to rigorous conventions.
Priority inversion: a low-priority thread/process holding a common lock can prevent high-priority threads/processes from proceeding. Priority inheritance can be used to reduce priority-inversion duration. The priority ceiling protocol can be used on uniprocessor systems to minimize the worst-case priority-inversion duration, as well as prevent deadlock.
Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault.
Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, like lock-free programming techniques and transactional memory. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the application level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application.
In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of gracefully handling the reporting of impossible demands.
=== Lack of composability ===
One of lock-based programming's biggest problems is that "locks don't compose": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals. Simon Peyton Jones (an advocate of software transactional memory) gives the following example of a banking application: design a class Account that allows multiple concurrent clients to deposit or withdraw money to an account, and give an algorithm to transfer money from one account to another.
The lock-based solution to the first part of the problem is:
class Account:
    member balance: Integer
    member mutex: Lock

    method deposit(n: Integer)
        mutex.lock()
        balance ← balance + n
        mutex.unlock()

    method withdraw(n: Integer)
        deposit(−n)
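The pseudocode above translates almost directly into Python's threading module (a sketch; the with statement stands in for the explicit lock()/unlock() pair and releases the lock even if the body raises):

```python
import threading

class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self.mutex = threading.Lock()

    def deposit(self, n):
        with self.mutex:          # lock() on entry, unlock() on exit
            self.balance += n

    def withdraw(self, n):
        self.deposit(-n)          # reuses deposit, as in the pseudocode

a = Account(100)
a.deposit(50)
a.withdraw(30)
```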
The second part of the problem is much more complicated. A transfer routine that is correct for sequential programs would be
function transfer(from: Account, to: Account, amount: Integer)
    from.withdraw(amount)
    to.deposit(amount)
In a concurrent program, this algorithm is incorrect because when one thread is halfway through transfer, another might observe a state where amount has been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. This problem can only be fixed completely by putting locks on both accounts prior to changing either one, but then the locks have to be placed according to some arbitrary, global ordering to prevent deadlock:
function transfer(from: Account, to: Account, amount: Integer)
    if from < to    // arbitrary ordering on the locks
        from.lock()
        to.lock()
    else
        to.lock()
        from.lock()
    from.withdraw(amount)
    to.deposit(amount)
    from.unlock()
    to.unlock()
This solution gets more complicated when more locks are involved, and the transfer function needs to know about all of the locks, so they cannot be hidden.
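The ordered-locking transfer can likewise be sketched in Python; here the objects' id() values supply the arbitrary global ordering (any fixed total order on accounts would do):

```python
import threading

class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self.mutex = threading.Lock()

def transfer(src, dst, amount):
    # Always lock the account that sorts first, to prevent deadlock
    # when two threads transfer between the same pair in opposite directions.
    first, second = sorted((src, dst), key=id)
    with first.mutex:
        with second.mutex:
            src.balance -= amount
            dst.balance += amount

a, b = Account(100), Account(0)
transfer(a, b, 40)
```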
== Language support ==
Programming languages vary in their support for synchronization:
Ada provides protected objects that have visible protected subprograms or entries as well as rendezvous.
The ISO/IEC C standard provides a standard mutual exclusion (locks) application programming interface (API) since C11. The current ISO/IEC C++ standard supports threading facilities since C++11. The OpenMP standard is supported by some compilers, and allows critical sections to be specified using pragmas. The POSIX pthread API provides lock support. Visual C++ provides the synchronize attribute of methods to be synchronized, but this is specific to COM objects in the Windows architecture and Visual C++ compiler. C and C++ can easily access any native operating system locking features.
C# provides the lock keyword, which marks a critical section of code to give one thread at a time exclusive access to a resource.
Visual Basic (.NET) provides a SyncLock keyword like C#'s lock keyword.
Java provides the keyword synchronized to lock code blocks, methods or objects and libraries featuring concurrency-safe data structures.
Objective-C provides the keyword @synchronized to put locks on blocks of code and also provides the classes NSLock, NSRecursiveLock, and NSConditionLock along with the NSLocking protocol for locking as well.
PHP provides a file-based locking as well as a Mutex class in the pthreads extension.
Python provides a low-level mutex mechanism with a Lock class from the threading module.
The ISO/IEC Fortran standard (ISO/IEC 1539-1:2010) provides the lock_type derived type in the intrinsic module iso_fortran_env and the lock/unlock statements since Fortran 2008.
Ruby provides a low-level mutex object and no keyword.
Rust provides the Mutex<T> struct.
x86 assembly language provides the LOCK prefix on certain operations to guarantee their atomicity.
Haskell implements locking via a mutable data structure called an MVar, which can either be empty or contain a value, typically a reference to a resource. A thread that wants to use the resource ‘takes’ the value of the MVar, leaving it empty, and puts it back when it is finished. Attempting to take a resource from an empty MVar results in the thread blocking until the resource is available. As an alternative to locking, an implementation of software transactional memory also exists.
Go provides a low-level Mutex object in the standard library's sync package. It can be used for locking code blocks, methods or objects.
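As an illustration of the list above, the Python Lock class from the threading module is typically used as a context manager, so the lock is released even if the critical section raises:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # acquire(); ... release(), even on exception
            counter += 1

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, counter is reliably 4000; without it, concurrent
# read-modify-write updates could be lost.
```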
== Mutexes vs. semaphores ==
== See also ==
Critical section
Double-checked locking
File locking
Lock-free and wait-free algorithms
Monitor (synchronization)
Mutual exclusion
Read/write lock pattern
== References ==
== External links ==
Tutorial on Locks and Critical Sections
A differential backup is a type of data backup that saves only the difference in the data since the last full backup. The rationale is that, since changes to data are generally few compared to the entire amount of data in the data repository, the backup completes in less time than if a full backup were performed every time the organization or data owner wishes to back up changes. Another advantage, at least as compared to the incremental backup method, is that at data restoration time at most two backup media are ever needed to restore all the data. This simplifies data restores and increases the likelihood of shortening data restoration time.
== Meaning ==
A differential backup is a cumulative backup of all changes made since the last full backup, i.e., the differences since the last full backup. The advantage to this is the quicker recovery time, requiring only a full backup and the last differential backup to restore the entire data repository. The disadvantage is that for each day elapsed since the last full backup, more data needs to be backed up, especially if a significant proportion of the data has changed, thus increasing backup time as compared to the incremental backup method.
It is important to use the terms "differential backup" and "incremental backup" correctly. The two terms are widely used in the industry, and their use is largely standardized. A differential backup refers to a backup made to include the differences since the last full backup, while an incremental backup contains only the changes since the last incremental backup (or, of course, since the last full backup if the incremental backup in question is the first one immediately after a full backup). The major data backup vendors have standardized on these definitions.
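The rule can be stated in terms of file modification times. The sketch below is a deliberate simplification (real backup tools also track deletions, metadata, and catalogs); the file names and day numbers are illustrative:

```python
def differential(files, last_full):
    """Files modified since the last FULL backup (cumulative)."""
    return {name for name, mtime in files.items() if mtime > last_full}

def incremental(files, last_backup):
    """Files modified since the most recent backup of ANY kind."""
    return {name for name, mtime in files.items() if mtime > last_backup}

# Modification times as day numbers; full backup on day 0, then daily backups.
files = {"a.txt": 1, "b.txt": 2, "c.txt": 3}
diff_day3 = differential(files, last_full=0)   # everything changed since day 0
incr_day3 = incremental(files, last_backup=2)  # only the day-3 change
```

The differential set grows each day until the next full backup, while the incremental set stays small, which is exactly the trade-off described above.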
== Illustration ==
The difference between incremental and differential backups can be illustrated as follows, assuming an illustrative weekly rotation with a full backup on Sunday followed by a backup each weekday:
Incremental backups:
Monday – changes since Sunday (the full backup); Tuesday – changes since Monday; Wednesday – changes since Tuesday; and so on.
The above assumes that backups are done daily. Otherwise, the "Changes since" entry must be modified to refer to the last backup (whether that last backup was full or incremental). It also assumes a weekly rotation.
Differential backups:
Monday – changes since Sunday; Tuesday – changes since Sunday; Wednesday – changes since Sunday; and so on.
It is important to remember the industry-standard meaning of these two terms because, while the terms above are in very wide use, some writers have been known to reverse their meaning. For example, Oracle Corporation uses a reversed description of differential backups in its DB product as of May 14, 2015:
"Differential incremental backups - In a differential level 1 backup, RMAN backs up all blocks that have changed since the most recent cumulative or differential incremental backup, whether at level 1 or level 0. RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the level 0 backup."
== See also ==
Backup rotation scheme
Continuous data protection
Delta encoding
Disk Archive - portable robust program for archiving and backup
Dump (Unix) - UNIX utility for multilevel incremental file system backups.
rsync - File synchronization algorithm and protocol.
== References ==
== Further reading ==
In systems analysis, a many-to-many relationship is a type of cardinality that refers to the relationship between two entities, say, A and B, where A may contain a parent instance for which there are many children in B and vice versa.
== Data relationships ==
For example, think of A as Authors, and B as Books. An Author can write several Books, and a Book can be written by several Authors. In a relational database management system, such relationships are usually implemented by means of an associative table (also known as join table, junction table or cross-reference table), say, AB with two one-to-many relationships A → AB and B → AB. In this case the logical primary key for AB is formed from the two foreign keys (i.e. copies of the primary keys of A and B).
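The Authors/Books example can be sketched with Python's built-in sqlite3: the junction table holds one row per (author, book) pair, and its composite primary key is formed from the two foreign keys (all table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE author_book (                  -- associative (junction) table AB
    author_id INTEGER REFERENCES author(id),
    book_id   INTEGER REFERENCES book(id),
    PRIMARY KEY (author_id, book_id)        -- composite key from the two FKs
);
INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO book   VALUES (10, 'Databases'), (11, 'SQL');
INSERT INTO author_book VALUES (1, 10), (2, 10), (1, 11);  -- (1,10)/(2,10): co-authors
""")

# All authors of 'Databases': the many-to-many link resolves through AB.
rows = conn.execute("""
    SELECT a.name FROM author a
    JOIN author_book ab ON ab.author_id = a.id
    JOIN book b ON b.id = ab.book_id
    WHERE b.title = 'Databases' ORDER BY a.name
""").fetchall()
```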
In web application frameworks such as CakePHP and Ruby on Rails, a many-to-many relationship between entity types represented by logical model database tables is sometimes referred to as a HasAndBelongsToMany (HABTM) relationship.
== See also ==
Associative entity
One-to-one (data model)
One-to-many (data model)
== References ==
In relational database theory, a functional dependency is the following constraint between two attribute sets in a relation: Given a relation R and attribute sets X, Y ⊆ R, X is said to functionally determine Y (written X → Y) if each X value is associated with precisely one Y value. R is then said to satisfy the functional dependency X → Y. Equivalently, the projection Π_{X,Y} R is a function, that is, Y is a function of X. In simple words, if the values for the X attributes are known (say they are x), then the values for the Y attributes corresponding to x can be determined by looking them up in any tuple of R containing x. Customarily X is called the determinant set and Y the dependent set. A functional dependency FD: X → Y is called trivial if Y is a subset of X.
In other words, a dependency FD: X → Y means that the values of Y are determined by the values of X. Two tuples sharing the same values of X will necessarily have the same values of Y.
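The definition can be checked mechanically: X → Y holds exactly when no two tuples agree on X but differ on Y. A small sketch, representing a relation as a list of dicts (the attribute names follow the Lectures example later in this article):

```python
def holds(relation, X, Y):
    """True if the functional dependency X -> Y holds in the relation."""
    seen = {}
    for row in relation:
        x = tuple(row[a] for a in X)
        y = tuple(row[a] for a in Y)
        if seen.setdefault(x, y) != y:   # same X value, different Y value
            return False
    return True

r = [
    {"StudentID": 1, "Lecture": "DB", "Semester": 2},
    {"StudentID": 1, "Lecture": "OS", "Semester": 2},
    {"StudentID": 2, "Lecture": "DB", "Semester": 4},
]
print(holds(r, ["StudentID"], ["Semester"]))  # each student has one semester
print(holds(r, ["Semester"], ["Lecture"]))    # semester 2 maps to DB and OS
```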
The determination of functional dependencies is an important part of designing databases in the relational model, and in database normalization and denormalization. A simple application of functional dependencies is Heath's theorem; it says that a relation R over an attribute set U and satisfying a functional dependency X → Y can be safely split in two relations having the lossless-join decomposition property, namely Π_{XY}(R) ⋈ Π_{XZ}(R) = R, where Z = U − XY are the rest of the attributes. (Unions of attribute sets are customarily denoted by their juxtapositions in database theory.) An important notion in this context is a candidate key, defined as a minimal set of attributes that functionally determine all of the attributes in a relation. The functional dependencies, along with the attribute domains, are selected so as to generate constraints that would exclude as much data inappropriate to the user domain from the system as possible.
A notion of logical implication is defined for functional dependencies in the following way: a set of functional dependencies Σ logically implies another set of dependencies Γ if any relation R satisfying all dependencies from Σ also satisfies all dependencies from Γ; this is usually written Σ ⊨ Γ. The notion of logical implication for functional dependencies admits a sound and complete finite axiomatization, known as Armstrong's axioms.
== Examples ==
=== Cars ===
Suppose one is designing a system to track vehicles and the capacity of their engines. Each vehicle has a unique vehicle identification number (VIN). One would write VIN → EngineCapacity because it would be inappropriate for a vehicle's engine to have more than one capacity. (Assuming, in this case, that vehicles only have one engine.) On the other hand, EngineCapacity → VIN is incorrect because there could be many vehicles with the same engine capacity.
This functional dependency may suggest that the attribute EngineCapacity be placed in a relation with candidate key VIN. However, that may not always be appropriate. For example, if that functional dependency occurs as a result of the transitive functional dependencies VIN → VehicleModel and VehicleModel → EngineCapacity then that would not result in a normalized relation.
=== Lectures ===
This example illustrates the concept of functional dependency. The situation modelled is that of college students visiting one or more lectures in each of which they are assigned a teaching assistant (TA). Let's further assume that every student is in some semester and is identified by a unique integer ID.
We notice that whenever two rows in this table feature the same StudentID,
they also necessarily have the same Semester values. This basic fact
can be expressed by a functional dependency:
StudentID → Semester.
If a row were added in which the student had a different Semester value, the functional dependency would no longer hold. This means that the FD is implied by the data rather than enforced by the schema: it is possible to insert values that would invalidate the FD.
Other nontrivial functional dependencies can be identified, for example:
{StudentID, Lecture} → TA
{StudentID, Lecture} → {TA, Semester}
The latter expresses the fact that the set {StudentID, Lecture} is a superkey of the relation.
=== Employee department ===
A classic example of functional dependency is the employee department model.
This case represents an example where multiple functional dependencies are embedded in a single representation of data. Note that because an employee can only be a member of one department, the unique ID of that employee determines the department.
Employee ID → Employee Name
Employee ID → Department ID
In addition to this relationship, the table also has a functional dependency through a non-key attribute
Department ID → Department Name
This example demonstrates that even though there exists an FD Employee ID → Department ID, the Employee ID would not be a logical key for determining the Department Name. The process of normalization of the data would recognize all FDs and allow the designer to construct tables and relationships that are more logical based on the data.
== Properties and axiomatization of functional dependencies ==
Given that X, Y, and Z are sets of attributes in a relation R, one can derive several properties of functional dependencies. Among the most important are the following, usually called Armstrong's axioms:
Reflexivity: If Y is a subset of X, then X → Y
Augmentation: If X → Y, then XZ → YZ
Transitivity: If X → Y and Y → Z, then X → Z
"Reflexivity" can be weakened to just
X
→
∅
{\displaystyle X\rightarrow \varnothing }
, i.e. it is an actual axiom, where the other two are proper inference rules, more precisely giving rise to the following rules of syntactic consequence:
⊢
X
→
∅
{\displaystyle \vdash X\rightarrow \varnothing }
X
→
Y
⊢
X
Z
→
Y
Z
{\displaystyle X\rightarrow Y\vdash XZ\rightarrow YZ}
X
→
Y
,
Y
→
Z
⊢
X
→
Z
{\displaystyle X\rightarrow Y,Y\rightarrow Z\vdash X\rightarrow Z}
.
These three rules are a sound and complete axiomatization of functional dependencies. This axiomatization is sometimes described as finite because the number of inference rules is finite, with the caveat that the axiom and rules of inference are all schemata, meaning that the X, Y and Z range over all ground terms (attribute sets).
By applying augmentation and transitivity, one can derive two additional rules:
Pseudotransitivity: If X → Y and YW → Z, then XW → Z
Composition: If X → Y and Z → W, then XZ → YW
One can also derive the union and decomposition rules from Armstrong's axioms:
X → Y and X → Z if and only if X → YZ
== Closure ==
=== Closure of functional dependency ===
The closure of a set of attributes is the set of attributes that can be determined from it using the functional dependencies of a given relation. Armstrong's axioms (reflexivity, augmentation, and transitivity) provide the inference rules for the proof.
Given R and a set F of FDs that holds in R: the closure of F in R (denoted F+) is the set of all FDs that are logically implied by F.
=== Closure of a set of attributes ===
The closure of a set of attributes X with respect to F is the set X+ of all attributes that are functionally determined by X using F+.
==== Example ====
Imagine the following list of FDs. We are going to calculate a closure for A (written as A+) from this relationship.
A → B
B → C
AB → D
The closure is computed as follows: starting from {A}, the FD A → B adds B, B → C adds C, and AB → D adds D. Therefore, A+ = ABCD. Because A+ includes every attribute in the relationship, it is a superkey.
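The computation above is the standard attribute-closure algorithm: repeatedly add the right-hand side of every FD whose left-hand side is already contained in the closure, until nothing changes. A minimal sketch:

```python
def closure(attrs, fds):
    """X+ : all attributes functionally determined by attrs under fds."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the whole left side is in the closure, pull in the right side.
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

fds = [("A", "B"), ("B", "C"), ("AB", "D")]
print(sorted(closure("A", fds)))  # A+ = ABCD, so A is a superkey here
```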
== Covers and equivalence ==
=== Covers ===
Definition: F covers G if every FD in G can be inferred from F. Equivalently, F covers G if G+ ⊆ F+.
Every set of functional dependencies has a canonical cover.
=== Equivalence of two sets of FDs ===
Two sets of FDs F and G over schema R are equivalent, written F ≡ G, if F+ = G+. If F ≡ G, then F is a cover for G and vice versa. In other words, equivalent sets of functional dependencies are called covers of each other.
=== Non-redundant covers ===
A set F of FDs is nonredundant if there is no proper subset F′ of F with F′ ≡ F. If such an F′ exists, F is redundant. F is a nonredundant cover for G if F is a cover for G and F is nonredundant.
An alternative characterization of nonredundancy is that F is nonredundant if there is no FD X → Y in F such that F − {X → Y} ⊨ X → Y. Call an FD X → Y in F redundant in F if F − {X → Y} ⊨ X → Y.
== Applications to normalization ==
=== Heath's theorem ===
An important property (yielding an immediate application) of functional dependencies is that if R is a relation with columns named from some set of attributes U and R satisfies some functional dependency X → Y then R = Π_{XY}(R) ⋈ Π_{XZ}(R), where Z = U − XY. Intuitively, if a functional dependency X → Y holds in R, then the relation can be safely split in two relations alongside the column X (which is a key for Π_{XY}(R) ⋈ Π_{XZ}(R)), ensuring that when the two parts are joined back no data is lost; i.e. a functional dependency provides a simple way to construct a lossless-join decomposition of R in two smaller relations. This fact is sometimes called Heath's theorem; it is one of the early results in database theory.
Heath's theorem effectively says we can pull out the values of Y from the big relation R and store them into one, Π_{XY}(R), which has no value repetitions in the row for X and is effectively a lookup table for Y keyed by X, and consequently has only one place to update the Y corresponding to each X, unlike the "big" relation R, where there are potentially many copies of each X, each one with its copy of Y which need to be kept synchronized on updates. (This elimination of redundancy is an advantage in OLTP contexts, where many changes are expected, but not so much in OLAP contexts, which involve mostly queries.) Heath's decomposition leaves only X to act as a foreign key in the remainder of the big table Π_{XZ}(R).
Functional dependencies, however, should not be confused with inclusion dependencies, which are the formalism for foreign keys; even though they are used for normalization, functional dependencies express constraints over one relation (schema), whereas inclusion dependencies express constraints between relation schemas in a database schema. Furthermore, the two notions do not even intersect in the classification of dependencies: functional dependencies are equality-generating dependencies whereas inclusion dependencies are tuple-generating dependencies. Enforcing referential constraints after relation schema decomposition (normalization) requires a new formalism, i.e. inclusion dependencies. In the decomposition resulting from Heath's theorem, there is nothing preventing the insertion of tuples in Π_{XZ}(R) having some value of X not found in Π_{XY}(R).
=== Normal forms ===
Normal forms are database normalization levels which determine the "goodness" of a table. Generally, the third normal form is considered to be a "good" standard for a relational database.
Normalization aims to free the database from update, insertion and deletion anomalies. It also ensures that when a new value is introduced into the relation, it has minimal effect on the database, and thus minimal effect on the applications using the database.
== Irreducible function depending set ==
A set S of functional dependencies is irreducible if the set has the following three properties:
Each right set of a functional dependency of S contains only one attribute.
Each left set of a functional dependency of S is irreducible: removing any one attribute from the left set would change the content of S (S would lose some information).
Reducing any functional dependency will change the content of S.
Sets of functional dependencies with these properties are also called canonical or minimal. Finding such a set S of functional dependencies which is equivalent to some input set S' provided as input is called finding a minimal cover of S': this problem can be solved in polynomial time.
== See also ==
Chase (algorithm)
Inclusion dependency
Join dependency
Multivalued dependency (MVD)
Database normalization
First normal form
== References ==
== Further reading ==
Codd, E. F. (1972). "Further Normalization of the Data Base Relational Model" (PDF). ACM Transactions on Database Systems. San Jose, California: Association for Computing Machinery.
== External links ==
Gary Burt (Summer 1999). "CS 461 (Database Management Systems) lecture notes". University of Maryland Baltimore County Department of Computer Science and Electrical Engineering.
Jeffrey D. Ullman. "CS345 Lecture Notes" (PostScript). Stanford University.
Osmar Zaiane (June 9, 1998). "Chapter 6: Integrity constraints". CMPT 354 (Database Systems I) lecture notes. Simon Fraser University Department of Computing Science.
In computing, a database is an organized collection of data or a type of data store based on the use of a database management system (DBMS), the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
Before digital storage and retrieval of data became widespread, index cards were used for data storage in a wide range of applications and environments: in the home to record and store recipes, shopping lists, contact information and other organizational data; in business to record presentation notes, project research and notes, and contact information; in schools as flash cards or other visual aids; and in academic research to hold data such as bibliographical citations or notes in a card file. Professional book indexers used index cards in the creation of book indexes until they were replaced by indexing software in the 1980s and 1990s.
Small databases can be stored on a file system, while large databases are hosted on computer clusters or cloud storage. The design of databases spans formal techniques and practical considerations, including data modeling, efficient data representation and storage, query languages, security and privacy of sensitive data, and distributed computing issues, including supporting concurrent access and fault tolerance.
Computer scientists may classify database management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, collectively referred to as NoSQL, because they use different query languages.
== Terminology and overview ==
Formally, a "database" refers to a set of related data accessed through the use of a "database management system" (DBMS), which is an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index) as size and usage requirements typically necessitate use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups:
Data definition – Creation, modification and removal of definitions that detail how the data is to be organized.
Update – Insertion, modification, and deletion of the data itself.
Retrieval – Selecting data according to specified criteria (e.g., a query, a position in a hierarchy, or a position in relation to other data) and providing that data either directly to the user, or making it available for further processing by the database itself or by other applications. The retrieved data may be made available in a more or less direct form without modification, as it is stored in the database, or in a new form obtained by altering it or combining it with existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
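Three of the four functional groups above (data definition, update, and retrieval) can be illustrated with a minimal sketch using Python's standard sqlite3 module; the table and column names here are illustrative, not taken from any particular system:

```python
import sqlite3

# In-memory database; a real DBMS would persist this to stable storage.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data definition: create (and later, alter or drop) the schema.
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER)")

# Update: insert, modify, and delete the data itself.
cur.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Ada", 52000))
cur.execute("UPDATE employee SET salary = salary + 5000 WHERE name = ?", ("Ada",))

# Retrieval: select data according to specified criteria.
rows = cur.execute("SELECT name, salary FROM employee WHERE salary > 50000").fetchall()
print(rows)  # [('Ada', 57000)]

conn.close()
```

Administration (user management, security, recovery) is handled by DBMS-specific tools outside the query language proper.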
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
== History ==
The sizes, capabilities, and performance of databases and their respective DBMSs have grown by orders of magnitude. These performance increases were enabled by progress in processors, computer memory, computer storage, and computer networks. The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid-1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM Db2, Oracle, MySQL, and Microsoft SQL Server are the most widely used DBMSs. The dominant database language, standardized SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object–relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object–relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key–value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
=== 1960s, navigational DBMS ===
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However, CODASYL databases were complex and required significant training and effort to produce useful applications.
IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed: the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL databases are classified as network databases. IMS remains in use as of 2014.
=== 1970s, relational DBMS ===
Edgar F. Codd worked at IBM in San Jose, California, in an office primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
The paper described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organize the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus; the model itself takes its name from the mathematical notion of a relation. Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.
Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.
The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
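The normalized design described above can be sketched with Python's standard sqlite3 module; the user and phone tables below are illustrative stand-ins for the example schema, not from any real system:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized schema: one table per entity, linked by logical keys
# rather than disk addresses.
cur.executescript("""
CREATE TABLE user  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE phone (user_id INTEGER REFERENCES user(id), number TEXT);
""")
cur.execute("INSERT INTO user (id, name) VALUES (1, 'Codd')")
# A phone row is created only if a number was actually provided.
cur.execute("INSERT INTO phone (user_id, number) VALUES (1, '555-0100')")

# A declarative join reassembles the record by key, not by navigating links.
result = cur.execute(
    "SELECT user.name, phone.number FROM user JOIN phone ON phone.user_id = user.id"
).fetchall()
print(result)  # [('Codd', '555-0100')]

conn.close()
```

A user with no phone number simply has no row in the phone table, instead of an empty field in a variable-length record.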
As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.
Codd's paper inspired teams at various universities to research the subject, including one at University of California, Berkeley led by Eugene Wong and Michael Stonebraker, who started INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. The university in 1974 hosted a debate between Codd and Bachman which Bruce Lindsay of IBM later described as "throwing lightning bolts at each other!". MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.
=== Integrated approach ===
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued in certain applications by some companies like Netezza and Oracle (Exadata).
=== Late 1970s, SQL DBMS ===
IBM formed a team led by Codd that started working on a prototype system, System R, despite opposition from others at the company. The first version was ready in 1974/75, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (IBM Db2).
Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it was with Oracle Version 2 in 1979 that Ellison beat IBM to market.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed in the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
=== 1980s, on the desktop ===
Besides IBM and various software companies such as Sybase and Informix Corporation, most large computer hardware vendors by the 1980s had their own database systems such as DEC's VAX Rdb/VMS. The decade ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top selling software titles in the 1980s and early 1990s.
=== 1990s, object-oriented ===
By the start of the decade databases had become a billion-dollar industry in about ten years. The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows relationships between data to be expressed in terms of objects and their attributes rather than individual fields. The term "object–relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object–relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object–relational mappings (ORMs) attempt to solve the same problem.
=== 2000s, NoSQL and NewSQL ===
Database sales grew rapidly during the dotcom bubble and, after its end, the rise of ecommerce. The popularity of open source databases such as MySQL has grown since 2000, to the extent that Ken Jacobs of Oracle said in 2005 that perhaps "these guys are doing to us what we did to IBM".
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records.
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem, it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases use what is called eventual consistency, providing availability and partition tolerance with a reduced level of data consistency.
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system.
== Use cases ==
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
== Classification ==
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed up by non-volatile computer data storage. Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment.
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are developed by programmers and then maintained and used by end-users through a web browser and open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading, and managing data so as to make them available for further use.
A deductive database combines logic programming with a relational database.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
An embedded database system is a DBMS which is tightly integrated with an application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym for federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
An array DBMS is a kind of NoSQL DBMS that allows modeling, storage, and retrieval of (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business's customers, personnel databases that hold information such as salary, benefits, skills data about employees, enterprise resource planning systems that record details about product components, parts inventory, and financial databases that keep track of the organization's money, accounting and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared-nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store data with multidimensional features. Queries on such data include location-based queries, such as "Where is the closest hotel in my area?".
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
== Database management system ==
Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database." Examples of DBMS's include MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle Database, and Microsoft Access.
The DBMS acronym is sometimes extended to indicate the underlying database model, with RDBMS for the relational, OODBMS for the object (oriented) and ORDBMS for the object–relational model. Other extensions can indicate some other characteristic, such as DDBMS for a distributed database management system.
The functionality provided by a DBMS can vary enormously. The core functionality is the storage, retrieval and update of data. Codd proposed the following functions and services a fully-fledged general purpose DBMS should provide:
Data storage, retrieval and update
User accessible catalog or data dictionary describing the metadata
Support for transactions and concurrency
Facilities for recovering the database should it become damaged
Support for authorization of access and update of data
Access support from remote locations
Enforcing constraints to ensure data in the database abides by certain rules
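Two of the services above, transaction support and constraint enforcement, can be sketched together with Python's standard sqlite3 module (the account table and CHECK rule are hypothetical examples, not from the source):

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can
# manage the transaction boundaries explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()

# Constraint enforcement: the DBMS rejects any row with a negative balance.
cur.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
cur.execute("INSERT INTO account VALUES (1, 100), (2, 50)")

# Transaction support: either both updates take effect, or neither does.
try:
    cur.execute("BEGIN")
    cur.execute("UPDATE account SET balance = balance - 200 WHERE id = 1")  # violates CHECK
    cur.execute("UPDATE account SET balance = balance + 200 WHERE id = 2")
    cur.execute("COMMIT")
except sqlite3.IntegrityError:
    cur.execute("ROLLBACK")  # recovery: the failed transaction leaves no trace

balances = cur.execute("SELECT balance FROM account ORDER BY id").fetchall()
print(balances)  # [(100,), (50,)] — data unchanged after the rollback

conn.close()
```

A production DBMS adds the remaining services (catalogs, authorization, remote access, crash recovery) on top of this same transactional core.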
It is also generally expected that the DBMS will provide a set of utilities for administering the database effectively, including import, export, monitoring, defragmentation and analysis utilities. The core part of the DBMS that mediates between the database and the application interface is sometimes referred to as the database engine.
Often DBMSs will have configuration parameters that can be statically and dynamically tuned, for example the maximum amount of main memory on a server the database can use. The trend is to minimize the amount of manual configuration, and for cases such as embedded databases the need to target zero-administration is paramount.
Large enterprise DBMSs have tended to increase in size and functionality and have involved up to thousands of human years of development effort throughout their lifetime.
Early multi-user DBMS typically only allowed for the application to reside on the same computer with access via terminals or terminal emulation software. The client–server architecture was a development where the application resided on a client desktop and the database on a server allowing the processing to be distributed. This evolved into a multitier architecture incorporating application servers and web servers with the end user interface via a web browser with the database only directly connected to the adjacent tier.
A general-purpose DBMS will provide public application programming interfaces (API) and optionally a processor for database languages such as SQL to allow applications to be written to interact with and manipulate the database. A special purpose DBMS may use a private API and be specifically customized and linked to a single application. For example, an email system performs many of the functions of a general-purpose DBMS, such as message insertion, message deletion, attachment handling, blocklist lookup, and associating messages with an email address; however, these functions are limited to what is required to handle email.
== Application ==
External interaction with the database will be via an application program that interfaces with the DBMS. This can range from a database tool that allows users to execute SQL queries textually or graphically, to a website that happens to use a database to store and search information.
=== Application program interface ===
A programmer will code interactions to the database (sometimes referred to as a datasource) via an application program interface (API) or via a database language. The particular API or language chosen will need to be supported by the DBMS, possibly indirectly via a preprocessor or a bridging API. Some APIs aim to be database independent, ODBC being a commonly known example. Other common APIs include JDBC and ADO.NET.
== Database languages ==
Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages:
Data control language (DCL) – controls access to data;
Data definition language (DDL) – defines data structures such as tables and the relationships among them, including creating, altering, or dropping them;
Data manipulation language (DML) – performs tasks such as inserting, updating, or deleting data occurrences;
Data query language (DQL) – allows searching for information and computing derived information.
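The sublanguages other than DCL can be demonstrated in a minimal sketch using Python's standard sqlite3 module (SQLite itself has no GRANT/REVOKE, so DCL is omitted; the song table is a hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the schema.
cur.execute("CREATE TABLE song (id INTEGER PRIMARY KEY, title TEXT, plays INTEGER)")

# DML: insert, update, or delete data occurrences.
cur.executemany("INSERT INTO song (title, plays) VALUES (?, ?)",
                [("Alpha", 3), ("Beta", 7)])
cur.execute("DELETE FROM song WHERE plays < 5")

# DQL: search for information and compute derived information.
total = cur.execute("SELECT COUNT(*), SUM(plays) FROM song").fetchone()
print(total)  # (1, 7)

conn.close()
```

In SQL all four roles share one syntax; other data models (see below) split or rename them.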
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and Db2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
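Two of the features above, constraint enforcement and computations over query results, can be sketched together. The automotive example from the list is modeled here with a `UNIQUE` constraint so each car can have at most one engine row; the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Constraint enforcement: each car may have at most one engine row.
cur.executescript("""
    CREATE TABLE car (id INTEGER PRIMARY KEY, model TEXT);
    CREATE TABLE engine (car_id INTEGER UNIQUE REFERENCES car(id), type TEXT);
    INSERT INTO car (id, model) VALUES (1, 'Coupe'), (2, 'Sedan');
    INSERT INTO engine (car_id, type) VALUES (1, 'V8'), (2, 'I4');
""")
# A second engine for car 1 violates the UNIQUE constraint and is rejected.
rejected = False
try:
    cur.execute("INSERT INTO engine (car_id, type) VALUES (1, 'V6')")
except sqlite3.IntegrityError:
    rejected = True

# Computations over query results: counting and grouping.
cur.execute("SELECT type, COUNT(*) FROM engine GROUP BY type ORDER BY type")
counts = cur.fetchall()
print(rejected, counts)  # True [('I4', 1), ('V8', 1)]
conn.close()
```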
== Storage ==
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Databases as digital objects contain three layers of information which must be stored: the data, the structure, and the semantics. Proper storage of all three layers is needed for future preservation and longevity of the database. Putting data into permanent storage is generally the responsibility of the database engine a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often using the operating systems' file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize (the best possible) these levels' reconstruction when needed by users and programs, as well as for computing additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
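The effect of indexing on the storage engine's chosen access path can be observed directly. In SQLite, `EXPLAIN QUERY PLAN` reports whether a query will scan the whole table or seek through an index; the table and index names below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE reading (sensor TEXT, value REAL)")
cur.executemany("INSERT INTO reading VALUES (?, ?)",
                [("s%d" % (i % 100), float(i)) for i in range(1000)])

def plan(sql):
    # The 'detail' column of EXPLAIN QUERY PLAN describes the chosen strategy.
    return " ".join(row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT value FROM reading WHERE sensor = 's7'")
cur.execute("CREATE INDEX idx_sensor ON reading (sensor)")
after = plan("SELECT value FROM reading WHERE sensor = 's7'")

print(before)  # a full table scan
print(after)   # a search using idx_sensor
conn.close()
```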
=== Materialized views ===
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves the expense of computing them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
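SQLite has no native materialized views, but the idea can be emulated by storing a query result in an ordinary table and refreshing it when the base data change, which also makes the synchronization overhead visible. Table and function names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sale (region TEXT, amount REAL)")
cur.executemany("INSERT INTO sale VALUES (?, ?)",
                [("north", 10.0), ("north", 5.0), ("south", 7.0)])

# Materialize the query result into a table so readers avoid recomputing it.
cur.execute("CREATE TABLE sales_by_region AS "
            "SELECT region, SUM(amount) AS total FROM sale GROUP BY region")

def refresh():
    # The synchronization overhead mentioned above: the stored copy must be
    # rebuilt (or incrementally maintained) whenever the base data change.
    cur.execute("DELETE FROM sales_by_region")
    cur.execute("INSERT INTO sales_by_region "
                "SELECT region, SUM(amount) FROM sale GROUP BY region")

cur.execute("INSERT INTO sale VALUES ('south', 3.0)")
refresh()
cur.execute("SELECT total FROM sales_by_region WHERE region = 'south'")
south_total = cur.fetchone()[0]
print(south_total)  # 10.0
conn.close()
```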
=== Replication ===
Occasionally a database employs storage redundancy by database objects replication (with one or more copies) to increase data availability (both to improve performance of simultaneous multiple end-user accesses to the same database object, and to provide resiliency in a case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.
=== Virtualization ===
With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.
== Security ==
Database security deals with all aspects of protecting the database content, its owners, and its users. It ranges from protection against intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or the use of specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by special personnel, authorized by the database owner, who use dedicated protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
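The group/role model described above, individuals assigned to roles and roles granted entitlements on database objects, can be sketched as a small lookup. All names here are illustrative, not part of any real DBMS API:

```python
# Roles map database objects to the actions they permit; users map to roles.
role_privileges = {
    "payroll_clerk": {"payroll": {"SELECT"}},
    "hr_staff": {"work_history": {"SELECT"}, "medical": {"SELECT"}},
}
user_roles = {"alice": {"payroll_clerk"}, "bob": {"hr_staff"}}

def can_access(user, obj, action):
    """Return True if any of the user's roles grants the action on the object."""
    return any(action in role_privileges.get(role, {}).get(obj, set())
               for role in user_roles.get(user, set()))

print(can_access("alice", "payroll", "SELECT"))  # True
print(can_access("alice", "medical", "SELECT"))  # False
```

In DBMSs with DCL support, the same policy is typically expressed declaratively, e.g. `GRANT SELECT ON payroll TO payroll_clerk;`.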
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, or destruction, or removal; e.g., see physical security), or the interpretation of them, or parts of them to meaningful information (e.g., by looking at the strings of bits that they comprise, concluding specific valid credit-card numbers; e.g., see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches. Organizations therefore have strong reasons to take database security seriously: it safeguards against breaches and attacks such as firewall intrusion, virus spread, and ransomware, protecting essential company information that must not be disclosed to outsiders.
== Transactions and concurrency ==
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring or releasing a lock, etc.), an abstraction supported in databases and also in other systems. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: atomicity, consistency, isolation, and durability.
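Atomicity, the "all or nothing" property, can be demonstrated with a transfer between two accounts: if any statement in the transaction fails, every statement in it is rolled back. A sketch using Python's SQLite driver, where the connection's context manager commits on success and rolls back on error (table and account names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO account VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE account SET balance = balance + 150 WHERE name = 'b'")
        # This debit violates the CHECK constraint (100 - 150 < 0) ...
        conn.execute("UPDATE account SET balance = balance - 150 WHERE name = 'a'")
except sqlite3.IntegrityError:
    pass  # ... so the whole transfer, including the credit above, is undone

balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'a': 100, 'b': 0}: atomicity preserved the old state
conn.close()
```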
== Migration ==
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership, or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should, if possible, leave the database's related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may also be desirable to preserve some aspects of the internal architectural level. A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate. This is in spite of the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.
== Building, maintaining, and tuning ==
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialized, and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added to, and new related application programs may be written to add to the application's functionality.
== Backup and restore ==
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When it is decided by a database administrator to bring the database back to this state (e.g., by specifying this state by a desired point in time when the database was in this state), these files are used to restore that state.
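As one concrete mechanism, SQLite exposes an online backup API (available in Python 3.7+ as `Connection.backup`) that copies a consistent snapshot of a live database, which can later serve as the restore point. A minimal sketch, backing up into a second in-memory database for brevity:

```python
import sqlite3

live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE t (x INTEGER)")
live.execute("INSERT INTO t VALUES (1), (2)")
live.commit()

# Take a consistent online backup of the live database (here into a second
# in-memory database; a real deployment would back up to files on a schedule).
backup = sqlite3.connect(":memory:")
live.backup(backup)

# Simulate an erroneous update, then fall back to the backed-up state.
live.execute("DELETE FROM t")
live.commit()
live.close()

restored_count = backup.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(restored_count)  # 2
backup.close()
```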
== Static analysis ==
Static analysis techniques for software verification can also be applied in the setting of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular for security purposes, such as fine-grained access control, watermarking, etc.
== Miscellaneous features ==
Other DBMS features might include:
Database logs – keep a history of the executed operations.
Graphics component for producing graphs and charts, especially in a data warehouse system.
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as "DevOps for database".
== Design and modeling ==
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity–relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design).
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
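The goal of recording each fact in one place can be sketched concretely. In the denormalized layout below, a customer's city would be repeated on every order row; after normalization it is stored once, so a single update keeps all orders consistent. Table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Denormalized: the customer's city is repeated on every order, so updating
# it in one row but not the others would make the data inconsistent.
conn.execute("CREATE TABLE order_flat (order_id INTEGER, customer TEXT, city TEXT)")

# Normalized: each elementary fact (a customer's city) is recorded once.
conn.executescript("""
    CREATE TABLE customer (name TEXT PRIMARY KEY, city TEXT);
    CREATE TABLE "order" (order_id INTEGER PRIMARY KEY,
                          customer TEXT REFERENCES customer(name));
    INSERT INTO customer VALUES ('Acme', 'Oslo');
    INSERT INTO "order" VALUES (1, 'Acme'), (2, 'Acme');
""")
# A single UPDATE now keeps every order consistent automatically.
conn.execute("UPDATE customer SET city = 'Bergen' WHERE name = 'Acme'")
rows = conn.execute("""SELECT o.order_id, c.city FROM "order" o
                       JOIN customer c ON c.name = o.customer""").fetchall()
print(rows)  # [(1, 'Bergen'), (2, 'Bergen')]
conn.close()
```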
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
=== Models ===
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object–relational database combines the object model and the relational model.
Physical data models include:
Inverted index
Flat file
Other models include:
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
=== External, conceptual, and internal views ===
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level (or logical level) unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual and internal view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are in the interest of the human resources department. Thus different departments need different views of the company's database.
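In relational DBMSs, such department-specific external views are commonly implemented with SQL views. A minimal sketch of the payroll example above (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                           salary REAL, medical_notes TEXT);
    INSERT INTO employee VALUES (1, 'Alice', 50000, 'confidential');
    -- An external view for the finance department: payroll columns only.
    CREATE VIEW payroll_view AS SELECT id, name, salary FROM employee;
""")
payroll_row = conn.execute("SELECT * FROM payroll_view").fetchone()
print(payroll_row)  # (1, 'Alice', 50000.0): medical data is not exposed
conn.close()
```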
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On the one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structure types.
== Research ==
Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, related concurrency control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).
== See also ==
== Notes ==
== References ==
== Sources ==
== Further reading ==
Ling Liu and M. Tamer Özsu (Eds.) (2009). Encyclopedia of Database Systems, 4100 pp., 60 illus. ISBN 978-0-387-49616-0.
Gray, J. and Reuter, A. Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
Kroenke, David M. and David J. Auer. Database Concepts. 3rd ed. New York: Prentice, 2007.
Raghu Ramakrishnan and Johannes Gehrke, Database Management Systems.
Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts.
Lightstone, S.; Teorey, T.; Nadeau, T. (2007). Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more. Morgan Kaufmann Press. ISBN 978-0-12-369389-1.
Teorey, T.; Lightstone, S. and Nadeau, T. Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005. ISBN 0-12-685352-5.
CMU Database courses playlist
MIT OCW 6.830 | Fall 2010 | Database Systems
Berkeley CS W186
== External links ==
DB File extension – information about files with the DB extension
Agile software development is an umbrella term for approaches to developing software that reflect the values and principles agreed upon by The Agile Alliance, a group of 17 software practitioners, in 2001. As documented in their Manifesto for Agile Software Development the practitioners value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
The practitioners cite inspiration from new practices at the time including extreme programming, scrum, dynamic systems development method, adaptive software development and being sympathetic to the need for an alternative to documentation driven, heavyweight software development processes.
Many software development practices emerged from the agile mindset. These agile-based practices, sometimes called Agile (with a capital A), include requirements discovery and solutions improvement through the collaborative effort of self-organizing and cross-functional teams working with their customer(s)/end user(s).
While there is much anecdotal evidence that the agile mindset and agile-based practices improve the software development process, the empirical evidence is limited and less than conclusive.
== History ==
Iterative and incremental software development methods can be traced back as early as 1957, with evolutionary project management and adaptive software development emerging in the early 1970s.
During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods (often referred to collectively as waterfall) that critics described as overly regulated, planned, and micromanaged. These lightweight methods included: rapid application development (RAD), from 1991; the unified process (UP) and dynamic systems development method (DSDM), both from 1994; Scrum, from 1995; Crystal Clear and extreme programming (XP), both from 1996; and feature-driven development (FDD), from 1997. Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods.
Since 1991, similar changes had already been underway in manufacturing and management thinking, derived from lean management.
In 2001, seventeen software developers met at a resort in Snowbird, Utah to discuss lightweight development methods. They were: Kent Beck (Extreme Programming), Ward Cunningham (Extreme Programming), Dave Thomas (Pragmatic Programming, Ruby), Jeff Sutherland (Scrum), Ken Schwaber (Scrum), Jim Highsmith (Adaptive Software Development), Alistair Cockburn (Crystal), Robert C. Martin (SOLID), Mike Beedle (Scrum), Arie van Bennekum, Martin Fowler (OOAD and UML), James Grenning, Andrew Hunt (Pragmatic Programming, Ruby), Ron Jeffries (Extreme Programming), Jon Kern, Brian Marick (Ruby, Test-driven development), and Steve Mellor (OOA). The group, The Agile Alliance, published the Manifesto for Agile Software Development.
In 2005, a group headed by Cockburn and Highsmith wrote an addendum of project management principles, the PM Declaration of Interdependence, to guide software project management according to agile software development methods.
In 2009, a group working with Martin wrote an extension of software development principles, the Software Craftsmanship Manifesto, to guide agile software development according to professional conduct and mastery.
In 2011, the Agile Alliance created the Guide to Agile Practices (renamed the Agile Glossary in 2016), an evolving open-source compendium of the working definitions of agile practices, terms, and elements, along with interpretations and experience guidelines from the worldwide community of agile practitioners.
== Values and principles ==
=== Values ===
The agile manifesto reads:
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
Scott Ambler explained:
Tools and processes are important, but it is more important to have competent people working together effectively.
Good documentation is useful in helping people to understand how the software is built and how to use it, but the main point of development is to create software, not documentation.
A contract is important but is not a substitute for working closely with customers to discover what they need.
A project plan is important, but it must not be too rigid to accommodate changes in technology or the environment, stakeholders' priorities, and people's understanding of the problem and its solution.
Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith said,
The Agile movement is not anti-methodology, in fact many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of never-maintained and rarely-used tomes. We plan, but recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as "hackers" are ignorant of both the methodologies and the original definition of the term hacker.
=== Principles ===
The values are based on these principles:
Customer satisfaction by early and continuous delivery of valuable software.
Welcome changing requirements, even in late development.
Deliver working software frequently (weeks rather than months).
Close, daily cooperation between business people and developers.
Projects are built around motivated individuals, who should be trusted.
Face-to-face conversation is the best form of communication (co-location).
Working software is the primary measure of progress.
Sustainable development, able to maintain a constant pace.
Continuous attention to technical excellence and good design.
Simplicity—the art of maximizing the amount of work not done—is essential.
Best architectures, requirements, and designs emerge from self-organizing teams.
Regularly, the team reflects on how to become more effective, and adjusts accordingly.
== Overview ==
=== Iterative, incremental, and evolutionary ===
Most agile development methods break product development work into small increments that minimize the amount of up-front planning and design. Iterations, or sprints, are short time frames (timeboxes) that typically last from one to four weeks. Each iteration involves a cross-functional team working in all functions: planning, analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration a working product is demonstrated to stakeholders. This minimizes overall risk and allows the product to adapt to changes quickly. An iteration might not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration. Through incremental development, products have room to "fail often and early" throughout each iterative phase instead of drastically on a final release date. Multiple iterations might be required to release a product or new features. Working software is the primary measure of progress.
A key advantage of agile approaches is speed to market and risk mitigation. Smaller increments are typically released to market, reducing the time and cost risks of engineering a product that does not meet user requirements.
=== Efficient and face-to-face communication ===
The 6th principle of the agile manifesto for software development states "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation". The manifesto, written in 2001 when video conferencing was not widely used, states this in relation to the communication of information, not necessarily that a team should be co-located.
The principle of co-location is that co-workers on the same team should be situated together to better establish the identity as a team and to improve communication. This enables face-to-face interaction, ideally in front of a whiteboard, that reduces the cycle time typically taken when questions and answers are mediated through phone, persistent chat, wiki, or email. With the widespread adoption of remote working during the COVID-19 pandemic and changes to tooling, more studies have been conducted around co-location and distributed working which show that co-location is increasingly less relevant.
No matter which development method is followed, every team should include a customer representative (known as product owner in Scrum). This representative is agreed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer questions throughout the iteration. At the end of each iteration, the project stakeholders together with the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (ROI) and ensuring alignment with customer needs and company goals. The importance of stakeholder satisfaction, detailed by frequent interaction and review at the end of each phase, is why the approach is often denoted as a customer-centered methodology.
==== Information radiator ====
In agile software development, an information radiator is a (normally large) physical display, board with sticky notes or similar, located prominently near the development team, where passers-by can see it. It presents an up-to-date summary of the product development status. A build light indicator may also be used to inform a team about the current status of their product development.
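The decision logic behind a build light indicator can be sketched in a few lines. The status strings and the notion of one latest result per pipeline are illustrative assumptions, not a real CI system's API:

```python
def build_light_color(statuses):
    """Map the latest CI result per pipeline to a lamp color.

    statuses: e.g. ["passed", "running", "failed"] (hypothetical values).
    Any failure turns the light red; an in-progress build without
    failures shows yellow; otherwise the light is green.
    """
    if any(s == "failed" for s in statuses):
        return "red"
    if any(s == "running" for s in statuses):
        return "yellow"
    return "green"

print(build_light_color(["passed", "running"]))  # → yellow
```

In practice such a script would poll the team's CI server and drive a physical lamp; the mapping itself is the information-radiator part, visible at a glance to passers-by.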
=== Very short feedback loop and adaptation cycle ===
A common characteristic in agile software development is the daily stand-up (known as daily scrum in the Scrum framework). In a brief session (e.g., 15 minutes), team members review collectively how they are progressing toward their goal and agree whether they need to adapt their approach. To keep to the agreed time limit, teams often use simple coded questions (such as what they completed the previous day, what they aim to complete that day, and whether there are any impediments or risks to progress), and delay detailed discussions and problem resolution until after the stand-up.
=== Quality focus ===
Specific tools and techniques, such as continuous integration, automated unit testing, pair programming, test-driven development, design patterns, behavior-driven development, domain-driven design, code refactoring and other techniques are often used to improve quality and enhance product development agility. This is predicated on designing and building quality in from the beginning and being able to demonstrate software for customers at any point, or at least at the end of every iteration.
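One cycle of test-driven development, for example, can be sketched as follows. The `add_line_item` function and its rules are hypothetical illustrations, not drawn from any particular project; in TDD the test is written first, seen to fail, and only then is the production code written to make it pass:

```python
import unittest

# Production code: in TDD this is written only after the tests below
# have been observed to fail ("red"), then made to pass ("green").
def add_line_item(order_total, price, quantity):
    """Return the new order total after adding quantity * price."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return order_total + price * quantity

class TestAddLineItem(unittest.TestCase):
    def test_adds_price_times_quantity(self):
        self.assertEqual(add_line_item(10.0, 2.5, 4), 20.0)

    def test_rejects_negative_quantity(self):
        with self.assertRaises(ValueError):
            add_line_item(10.0, 2.5, -1)

if __name__ == "__main__":
    unittest.main()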
== Philosophy ==
Compared to traditional software engineering, agile software development mainly targets complex systems and product development with dynamic, indeterministic and non-linear properties. Accurate estimates, stable plans, and predictions are often hard to get in early stages, and confidence in them is likely to be low. Agile practitioners seek to reduce the "leap of faith" that is needed before any evidence of value can be obtained. Requirements and design are held to be emergent. Large up-front specifications would likely cause a great deal of waste in such cases and are therefore not economically sound. These basic arguments and previous industry experiences, learned from years of successes and failures, have helped shape agile development's favor of adaptive, iterative and evolutionary development.
=== Adaptive vs. predictive ===
Development methods exist on a continuum from adaptive to predictive. Agile software development methods lie on the adaptive side of this continuum. One key of adaptive development methods is a rolling wave approach to schedule planning, which identifies milestones but leaves flexibility in the path to reach them, and also allows for the milestones themselves to change.
Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive team changes as well. An adaptive team has difficulty describing exactly what will happen in the future. The further away a date is, the more vague an adaptive method is about what will happen on that date. An adaptive team cannot report exactly what tasks they will do next week, but only which features they plan for next month. When asked about a release six months from now, an adaptive team might be able to report only the mission statement for the release, or a statement of expected value vs. cost.
Predictive methods, in contrast, focus on analyzing and planning the future in detail and cater for known risks. In the extremes, a predictive team can report exactly what features and tasks are planned for the entire length of the development process. Predictive methods rely on effective early phase analysis, and if this goes very wrong, the project may have difficulty changing direction. Predictive teams often institute a change control board to ensure they consider only the most valuable changes.
Risk analysis can be used to choose between adaptive (agile or value-driven) and predictive (plan-driven) methods. Barry Boehm and Richard Turner suggest that each side of the continuum has its own home ground, as follows:
=== Agile vs. waterfall ===
One of the differences between agile software development methods and waterfall is the approach to quality and testing. In the waterfall model, work moves through software development life cycle (SDLC) phases—with one phase being completed before another can start—hence the testing phase is separate and follows a build phase. In agile software development, however, testing is completed in the same iteration as programming.
Because testing is done in every iteration—which develops a small piece of the software—users can frequently use those new pieces of software and validate the value. After the users know the real value of the updated piece of software, they can make better decisions about the software's future. Having a value retrospective and software re-planning session in each iteration—Scrum typically has iterations of just two weeks—helps the team continuously adapt its plans so as to maximize the value it delivers. This follows a pattern similar to the plan-do-check-act (PDCA) cycle, as the work is planned, done, checked (in the review and retrospective), and any changes agreed are acted upon.
This iterative approach supports a product rather than a project mindset, providing greater flexibility throughout the development process; on projects, by contrast, the requirements are defined and locked down from the very beginning, making them difficult to change later. Iterative product development allows the software to evolve in response to changes in the business environment or market requirements.
=== Code vs. documentation ===
In a letter to IEEE Computer, Steven Rakitin expressed cynicism about agile software development, calling it "yet another attempt to undermine the discipline of software engineering" and translating "working software over comprehensive documentation" as "we want to spend all our time coding. Remember, real programmers don't write documentation."
This is disputed by proponents of agile software development, who state that developers should write documentation if that is the best way to achieve the relevant goals, but that there are often better ways to achieve those goals than writing static documentation.
Scott Ambler states that documentation should be "just barely good enough" (JBGE), that too much or comprehensive documentation would usually cause waste, and developers rarely trust detailed documentation because it's usually out of sync with code, while too little documentation may also cause problems for maintenance, communication, learning and knowledge sharing. Alistair Cockburn wrote of the Crystal Clear method:
Crystal considers development a series of co-operative games, and intends that the documentation is enough to help the next win at the next game. The work products for Crystal include use cases, risk list, iteration plan, core domain models, and design notes to inform on choices...however there are no templates for these documents and descriptions are necessarily vague, but the objective is clear, just enough documentation for the next game. I always tend to characterize this to my team as: what would you want to know if you joined the team tomorrow.
== Methods ==
Agile software development methods support a broad range of the software development life cycle. Some methods focus on the practices (e.g., XP, pragmatic programming, agile modeling), while some focus on managing the flow of work (e.g., Scrum, Kanban). Some support activities for requirements specification and development (e.g., FDD), while some seek to cover the full development life cycle (e.g., DSDM, RUP).
Notable agile software development frameworks include:
=== Agile software development practices ===
Agile software development is supported by a number of concrete practices, covering areas like requirements, design, modeling, coding, testing, planning, risk management, process, quality, etc. Some notable agile software development practices include:
==== Acceptance test-driven development ====
==== Agile modeling ====
==== Agile testing ====
==== Backlogs ====
==== Behavior-driven development ====
==== Continuous integration ====
==== Cross-functional team ====
==== Daily stand-up ====
=== Method tailoring ===
In the literature, different terms refer to the notion of method adaptation, including 'method tailoring', 'method fragment adaptation' and 'situational method engineering'. Method tailoring is defined as:
A process or capability in which human agents determine a system development approach for a specific project situation through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments.
Situation-appropriateness should be considered as a distinguishing characteristic between agile methods and more plan-driven software development methods, with agile methods allowing product development teams to adapt working practices according to the needs of individual products. Potentially, most agile methods could be suitable for method tailoring, such as DSDM tailored in a CMM context, and XP tailored with the Rule Description Practices (RDP) technique. Not all agile proponents agree, however, with Schwaber noting "that is how we got into trouble in the first place, thinking that the problem was not having a perfect methodology. Efforts [should] center on the changes [needed] in the enterprise". Bas Vodde reinforced this viewpoint, suggesting that unlike traditional, large methodologies that require you to pick and choose elements, Scrum provides the basics on top of which you add additional elements to localize and contextualize its use. Practitioners seldom use system development methods, or agile methods specifically, by the book, often choosing to omit or tailor some of the practices of a method in order to create an in-house method.
In practice, methods can be tailored using various tools. Generic process modeling languages such as Unified Modeling Language can be used to tailor software development methods. However, dedicated tools for method engineering such as the Essence Theory of Software Engineering of SEMAT also exist.
=== Large-scale, offshore and distributed ===
Agile software development has been widely seen as highly suited to certain types of environments, including small teams of experts working on greenfield projects. The challenges and limitations encountered when adopting agile software development methods in a large organization with legacy infrastructure are, however, well documented and understood.
In response, a range of strategies and patterns has evolved for overcoming challenges with large-scale development efforts (>20 developers) or distributed (non-colocated) development teams, amongst other challenges; and there are now several recognized frameworks that seek to mitigate or avoid these challenges.
There are many conflicting viewpoints on whether all of these are effective or indeed fit the definition of agile development, and this remains an active and ongoing area of research.
When agile software development is applied in a distributed setting (with teams dispersed across multiple business locations), it is commonly referred to as distributed agile software development. The goal is to leverage the unique benefits offered by each approach. Distributed development allows organizations to build software by strategically setting up teams in different parts of the globe, virtually building software round-the-clock (more commonly referred to as follow-the-sun model). On the other hand, agile development provides increased transparency, continuous feedback, and more flexibility when responding to changes.
=== Regulated domains ===
Agile software development methods were initially seen as best suited to non-critical product development, and were therefore excluded from use in regulated domains such as medical devices, pharmaceuticals, finance, nuclear systems, automotive, and avionics. However, in the last several years, there have been several initiatives to adapt agile methods for these domains.
There are numerous standards that may apply in regulated domains, including ISO 26262, ISO 9000, ISO 9001, and ISO/IEC 15504.
A number of key concerns are of particular importance in regulated domains:
Quality assurance (QA): Systematic and inherent quality management underpinning a controlled professional process and reliability and correctness of product.
Safety and security: Formal planning and risk management to mitigate safety risks for users and securely protecting users from unintentional and malicious misuse.
Traceability: Documentation providing auditable evidence of regulatory compliance and facilitating traceability and investigation of problems.
Verification and validation (V&V): Embedded throughout the software development process (e.g. user requirements specification, functional specification, design specification, code review, unit tests, integration tests, system tests).
== Experience and adoption ==
Although agile software development methods can be used with any programming paradigm or language in practice, they were originally closely associated with object-oriented environments such as Smalltalk, Lisp and later Java, C#. The initial adopters of agile methods were usually small to medium-sized teams working on unprecedented systems with requirements that were difficult to finalize and likely to change as the system was being developed. This section describes common problems that organizations encounter when they try to adopt agile software development methods as well as various techniques to measure the quality and performance of agile teams.
=== Measuring agility ===
==== Internal assessments ====
The Agility measurement index, amongst others, rates developments against five dimensions of product development (duration, risk, novelty, effort, and interaction). Other techniques are based on measurable goals and one study suggests that velocity can be used as a metric of agility. There are also agile self-assessments to determine whether a team is using agile software development practices (Nokia test, Karlskrona test, 42 points test).
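Velocity, one of the metrics mentioned above, is typically computed as a trailing average of story points completed per iteration. A minimal sketch (the numbers and the three-iteration window are illustrative assumptions):

```python
def velocity(completed_points_per_iteration, window=3):
    """Average story points completed over the last `window` iterations.

    A trailing average smooths out iteration-to-iteration noise; real
    teams usually track this in their planning tool rather than by hand.
    """
    recent = completed_points_per_iteration[-window:]
    return sum(recent) / len(recent)

# Hypothetical example: points completed in five two-week iterations.
history = [21, 18, 25, 23, 24]
print(velocity(history))  # → 24.0, the trailing three-iteration average
```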
==== Public surveys ====
One of the early studies reporting gains in quality, productivity, and business satisfaction from using agile software development methods was a survey conducted by Shine Technologies from November 2002 to January 2003.
A similar survey, the State of Agile, has been conducted every year since 2006 with thousands of participants from around the software development community. This tracks trends in the perceived benefits of agility, lessons learned, and good practices. Each survey has reported increasing numbers saying that agile software development helps them deliver software faster; improves their ability to manage changing customer priorities; and increases their productivity. Surveys have also consistently shown better results with agile product development methods compared to classical project management. On balance, there are reports that some feel that agile development methods are still too young to enable extensive academic research of their success.
=== Common agile software development pitfalls ===
Organizations and teams implementing agile software development often face difficulties transitioning from more traditional methods such as waterfall development, for example when teams have an agile process forced on them. These are often termed agile anti-patterns or, more commonly, agile smells. Below are some common examples:
==== Lack of overall product design ====
A goal of agile software development is to focus more on producing working software and less on documentation. This is in contrast to waterfall models where the process is often highly controlled and minor changes to the system require significant revision of supporting documentation. However, this does not justify completely doing without any analysis or design at all. Failure to pay attention to design can cause a team to proceed rapidly at first, but then to require significant rework as they attempt to scale up the system. One of the key features of agile software development is that it is iterative. When done correctly, agile software development allows the design to emerge as the system is developed and helps the team discover commonalities and opportunities for re-use.
==== Adding stories to an iteration in progress ====
In agile software development, stories (similar to use case descriptions) are typically used to define requirements and an iteration is a short period of time during which the team commits to specific goals. Adding stories to an iteration in progress is detrimental to a good flow of work. These should be added to the product backlog and prioritized for a subsequent iteration or in rare cases the iteration could be cancelled.
This does not mean that a story cannot expand. Teams must deal with new information, which may produce additional tasks for a story. If the new information prevents the story from being completed during the iteration, then it should be carried over to a subsequent iteration. However, it should be prioritized against all remaining stories, as the new information may have changed the story's original priority.
==== Lack of sponsor support ====
Agile software development is often implemented as a grassroots effort in organizations by software development teams trying to optimize their development processes and ensure consistency in the software development life cycle. By not having sponsor support, teams may face difficulties and resistance from business partners, other development teams and management. Additionally, they may suffer without appropriate funding and resources. This increases the likelihood of failure.
==== Insufficient training ====
A survey performed by VersionOne found respondents cited insufficient training as the most significant cause of failed agile implementations. Teams have fallen into the trap of assuming that the reduced processes of agile software development, compared to other approaches such as waterfall, mean that there are no actual rules for agile software development.
==== Product owner role is not properly filled ====
The product owner is responsible for representing the business in the development activity and is often the most demanding role.
A common mistake is to fill the product owner role with someone from the development team. This requires the team to make its own decisions on prioritization without real feedback from the business. They try to solve business issues internally or delay work as they reach outside the team for direction. This often leads to distraction and a breakdown in collaboration.
==== Teams are not focused ====
Agile software development requires teams to meet product commitments, which means they should focus on work for only that product. However, team members who appear to have spare capacity are often expected to take on other work, which makes it difficult for them to help complete the work to which their team had committed.
==== Excessive preparation/planning ====
Teams may fall into the trap of spending too much time preparing or planning. This is a common trap for teams less familiar with agile software development where the teams feel obliged to have a complete understanding and specification of all stories. Teams should be prepared to move forward with only those stories in which they have confidence, then during the iteration continue to discover and prepare work for subsequent iterations (often referred to as backlog refinement or grooming).
==== Problem-solving in the daily standup ====
A daily standup should be a focused, timely meeting where all team members disseminate information. If problem-solving occurs, it often can involve only certain team members and potentially is not the best use of the entire team's time. If during the daily standup the team starts diving into problem-solving, it should be set aside until a sub-team can discuss, usually immediately after the standup completes.
==== Assigning tasks ====
One of the intended benefits of agile software development is to empower the team to make choices, as they are closest to the problem. Additionally, they should make choices as close to implementation as possible, to use more timely information in the decision. If team members are assigned tasks by others or too early in the process, the benefits of localized and timely decision making can be lost.
Being assigned work also constrains team members into certain roles (for example, team member A must always do the database work), which limits opportunities for cross-training. Team members themselves can choose to take on tasks that stretch their abilities and provide cross-training opportunities.
==== Scrum master as a contributor ====
In the Scrum framework, which claims to be consistent with agile values and principles, the scrum master role is accountable for ensuring the scrum process is followed and for coaching the scrum team through that process. A common pitfall is for a scrum master to act as a contributor. While not prohibited by the Scrum framework, the scrum master needs to ensure they have the capacity to act in the role of scrum master first and not work on development tasks. A scrum master's role is to facilitate the process rather than create the product.
Having the scrum master also multitasking may result in too many context switches to be productive. Additionally, as a scrum master is responsible for ensuring roadblocks are removed so that the team can make forward progress, the benefit gained by individual tasks moving forward may not outweigh roadblocks that are deferred due to lack of capacity.
==== Lack of test automation ====
Due to the iterative nature of agile development, multiple rounds of testing are often needed. Automated testing helps reduce the impact of repeated unit, integration, and regression tests and frees developers and testers to focus on higher value work.
Test automation also supports continued refactoring required by iterative software development. Allowing a developer to quickly run tests to confirm refactoring has not modified the functionality of the application may reduce the workload and increase confidence that cleanup efforts have not introduced new defects.
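This role of tests as a safety net for refactoring can be illustrated with a characterization test that pins down current behavior. The `slugify` function and its rules are hypothetical examples, not from the article:

```python
def slugify(title):
    """Original implementation: lowercase, spaces become hyphens."""
    return "-".join(title.lower().split())

def slugify_refactored(title):
    """Refactored version; must preserve observable behavior."""
    return title.lower().replace(" ", "-")

def test_refactoring_preserved_behavior():
    # The same test suite is run before and after the refactoring;
    # a green run confirms the cleanup did not change functionality.
    for title in ["Agile Software Development", "Daily Stand-up"]:
        assert slugify_refactored(title) == slugify(title)

test_refactoring_preserved_behavior()
```

Note that the two implementations diverge on edge cases such as repeated spaces; a thorough characterization suite would include those inputs precisely so that such behavioral drift is caught before the refactoring ships.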
==== Allowing technical debt to build up ====
Focusing on delivering new functionality may result in increased technical debt. The team must allow themselves time for defect remediation and refactoring. Technical debt hinders planning abilities by increasing the amount of unscheduled work as production defects distract the team from further progress.
As the system evolves it is important to refactor. Over time the lack of constant maintenance causes increasing defects and development costs.
==== Attempting to take on too much in an iteration ====
A common misconception is that agile software development allows continuous change, however an iteration backlog is an agreement of what work can be completed during an iteration. Having too much work-in-progress (WIP) results in inefficiencies such as context-switching and queueing. The team must avoid feeling pressured into taking on additional work.
==== Fixed time, resources, scope, and quality ====
Agile software development fixes time (iteration duration), quality, and ideally resources in advance (though maintaining fixed resources may be difficult if developers are often pulled away from tasks to handle production incidents), while the scope remains variable. The customer or product owner often pushes for a fixed scope for an iteration. However, teams should be reluctant to commit to the locked time, resources and scope (commonly known as the project management triangle). Efforts to add scope to the fixed time and resources of agile software development may result in decreased quality.
==== Developer burnout ====
Due to the focused pace and continuous nature of agile practices, there is a heightened risk of burnout among members of the delivery team.
== Agile management ==
Agile project management is an iterative development process, where feedback is continuously gathered from users and stakeholders to create the right user experience. Different methods can be used to perform an agile process; these include Scrum, extreme programming, lean, and kanban.
The term agile management is applied to an iterative, incremental method of managing the design and build activities of engineering, information technology and other business areas that aim to provide new product or service development in a highly flexible and interactive manner, based on the principles expressed in the Manifesto for Agile Software Development.
Agile project management metrics help reduce confusion, identify weak points, and measure a team's performance throughout the development cycle. Supply chain agility is the ability of a supply chain to cope with uncertainty and variability in supply and demand. An agile supply chain can increase and reduce its capacity rapidly, so it can adapt to fast-changing customer demand. Finally, strategic agility is the ability of an organisation to change its course of action as its environment evolves. The key to strategic agility is to recognize external changes early enough and to allocate resources to adapt to these changing environments.
Agile X techniques may also be called extreme project management. It is a variant of iterative life cycle where deliverables are submitted in stages. The main difference between agile and iterative development is that agile methods complete small portions of the deliverables in each delivery cycle (iteration), while iterative methods evolve the entire set of deliverables over time, completing them near the end of the project. Both iterative and agile methods were developed as a reaction to various obstacles that developed in more sequential forms of project organization. For example, as technology projects grow in complexity, end users tend to have difficulty defining the long-term requirements without being able to view progressive prototypes. Projects that develop in iterations can constantly gather feedback to help refine those requirements.
Agile management also offers a simple framework promoting communication and reflection on past work amongst team members. Teams who were using traditional waterfall planning and adopted the agile way of development typically go through a transformation phase and often take help from agile coaches who help guide the teams through a smoother transformation. There are typically two styles of agile coaching: push-based and pull-based agile coaching. Here a "push-system" can refer to an upfront estimation of what tasks can be fitted into a sprint (pushing work) e.g. typical with scrum; whereas a "pull system" can refer to an environment where tasks are only performed when capacity is available. Agile management approaches have also been employed and adapted to the business and government sectors. For example, within the federal government of the United States, the United States Agency for International Development (USAID) is employing a collaborative project management approach that focuses on incorporating collaborating, learning and adapting (CLA) strategies to iterate and adapt programming.
Agile methods are mentioned in the Guide to the Project Management Body of Knowledge (PMBOK Guide 6th Edition) under the Product Development Lifecycle definition:
Within a project life cycle, there are generally one or more phases that are associated with the development of the product, service, or result. These are called a development life cycle (...) Adaptive life cycles are agile, iterative, or incremental. The detailed scope is defined and approved before the start of an iteration. Adaptive life cycles are also referred to as agile or change-driven life cycles.
=== Applications outside software development ===
According to Jean-Loup Richet (research fellow at ESSEC Institute for Strategic Innovation & Services) "this approach can be leveraged effectively for non-software products and for project management in general, especially in areas of innovation and uncertainty." The result is a product or project that best meets current customer needs and is delivered with minimal costs, waste, and time, enabling companies to achieve bottom line gains earlier than via traditional approaches.
Agile software development methods have been extensively used for development of software products and some of them use certain characteristics of software, such as object technologies. However, these techniques can be applied to the development of non-software products, such as computers, medical devices, food, clothing, and music. Agile software development methods have been used in non-development IT infrastructure deployments and migrations. Some of the wider principles of agile software development have also found application in general management (e.g., strategy, governance, risk, finance) under the terms business agility or agile business management. Agile software methodologies have also been adopted for use with the learning engineering process, an iterative data-informed process that applies human-centered design, and data informed decision-making to support learners and their development.
Agile software development paradigms can be used in other areas of life such as raising children. Its success in child development might be founded on some basic management principles; communication, adaptation, and awareness. In a TED Talk, Bruce Feiler shared how he applied basic agile paradigms to household management and raising children.
== Criticism ==
Agile practices have been cited as potentially inefficient in large organizations and certain types of development. Many organizations believe that agile software development methodologies are too extreme and adopt a hybrid approach that mixes elements of agile software development and plan-driven approaches. Some methods, such as the dynamic systems development method (DSDM), attempt this in a disciplined way, without sacrificing fundamental principles.
The increasing adoption of agile practices has also been criticized as a management fad that simply describes existing good practices under new jargon, promotes a one-size-fits-all mindset towards development strategies, and wrongly emphasizes method over results.
Alistair Cockburn organized a celebration of the 10th anniversary of the Manifesto for Agile Software Development in Snowbird, Utah on 12 February 2011, gathering some 30+ people who had been involved at the original meeting and since. A list of about 20 elephants in the room ('undiscussable' agile topics/issues) was collected, covering the alliances, failures and limitations of agile software development practices and context (possible causes: commercial interests, decontextualization, no obvious way to make progress based on failure, limited objective evidence, cognitive biases and reasoning fallacies), as well as politics and culture. As Philippe Kruchten wrote:
The agile movement is in some ways a bit like a teenager: very self-conscious, checking constantly its appearance in a mirror, accepting few criticisms, only interested in being with its peers, rejecting en bloc all wisdom from the past, just because it is from the past, adopting fads and new jargon, at times cocky and arrogant. But I have no doubts that it will mature further, become more open to the outside world, more reflective, and therefore, more effective.
The "Manifesto" may have had a negative impact on higher education management and leadership, where it suggested to administrators that slower traditional and deliberative processes should be replaced with more "nimble" ones. The concept rarely found acceptance among university faculty.
Another criticism is that in many ways, agile management and traditional management practices end up being in opposition to one another. A common criticism of this practice is that the time spent attempting to learn and implement the practice is too costly, despite potential benefits. A transition from traditional management to agile management requires total submission to agile and a firm commitment from all members of the organization to seeing the process through. Issues like unequal results across the organization, too much change for employees' ability to handle, or a lack of guarantees at the end of the transformation are just a few examples.
== See also ==
Cross-functional team
Scrum (software development)
Fail fast (business), a related subject in business management
Kanban
Agile leadership
Agile contracts
Rational unified process
== References ==
== Further reading ==
== External links ==
Agile Manifesto
Agile Glossary of the Agile Alliance
The New Methodology - Martin Fowler's description of the background to agile methods
AgilePatterns.org | Wikipedia/Agile_methodology |
Mac operating systems were developed by Apple Inc. in a succession of two major series.
In 1984, Apple debuted the operating system that is now known as the classic Mac OS with its release of the original Macintosh System Software. The system, rebranded Mac OS in 1997, was pre-installed on every Macintosh until 2002 and was briefly offered on Macintosh clones in the 1990s. It was noted for its ease of use, and also criticized for its lack of modern technologies compared to its competitors.
The current Mac operating system is macOS, originally named Mac OS X until 2012 and then OS X until 2016. It was developed between 1997 and 2001 after Apple's purchase of NeXT. It brought an entirely new architecture based on NeXTSTEP, a Unix system, that eliminated many of the technical challenges that the classic Mac OS faced, such as problems with memory management. The current macOS is pre-installed with every Mac and receives a major update annually. It is the basis of Apple's current system software for its other devices – iOS, iPadOS, watchOS, and tvOS.
Prior to the introduction of Mac OS X, Apple experimented with several other concepts, releasing different products designed to bring the Macintosh interface or applications to Unix-like systems or vice versa, including A/UX, MAE, and MkLinux. Apple's effort to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code-named Star Trek, Taligent, and Copland.
Although the classic Mac OS and macOS (Mac OS X) have different architectures, they share a common set of GUI principles, including a menu bar across the top of the screen; the Finder shell, featuring a desktop metaphor that represents files and applications using icons and relates concepts like directories and file deletion to real-world objects like folders and a trash can; and overlapping windows for multitasking.
Before the arrival of the Macintosh in 1984, Apple's history of operating systems began with its Apple II computers in 1977, which ran Apple DOS, ProDOS, and GS/OS; the Apple III in 1980, which ran Apple SOS; and the Lisa in 1983, which ran Lisa OS and later MacWorks XL, a Macintosh emulator. Apple developed the Newton OS for its Newton personal digital assistant from 1993 to 1997.
Apple launched several new operating systems based on the core of macOS, including iOS in 2007 for its iPhone, iPad, and iPod Touch mobile devices and in 2017 for its HomePod smart speakers; watchOS in 2015 for the Apple Watch; and tvOS in 2015 for the Apple TV set-top box.
== Classic Mac OS ==
The classic Mac OS is the original Macintosh operating system, introduced in 1984 alongside the first Macintosh; it remained in primary use on Macs until the release of Mac OS X in 2001.
Apple released the original Macintosh on January 24, 1984; its early system software is partially based on Lisa OS, and inspired by the Alto computer, which former Apple CEO Steve Jobs previewed at Xerox PARC. It was originally named "System Software", or simply "System"; Apple rebranded it as "Mac OS" in 1996 due in part to its Macintosh clone program that ended one year later.
Classic Mac OS is characterized by its monolithic design. Initial versions of the System Software run one application at a time. System 5 introduced cooperative multitasking. System 7 supports 32-bit memory addressing and virtual memory, allowing larger programs. Later updates to System 7 enable the transition to the PowerPC architecture. The system was considered user-friendly, but its architectural limitations were critiqued, such as limited memory management, lack of protected memory and access controls, and susceptibility to conflicts among extensions.
=== Releases ===
Nine major versions of the classic Mac OS were released. The name "Classic" that now signifies the system as a whole is a reference to a compatibility layer that helped ease the transition to Mac OS X.
Macintosh System Software – "System 1", released in 1984
System Software 2, 3, and 4 – released between 1985 and 1987
System Software 5 – released in 1987
System Software 6 – released in 1988
System 7 / Mac OS 7.6 – released in 1991
Mac OS 8 – released in 1997
Mac OS 9 – final major version, released in 1999
== Mac OS X, OS X, and macOS ==
Launched in 2001 as the official successor to the classic Mac OS, the system was originally marketed as Mac OS X, renamed OS X in 2012 and macOS in 2016; it remains the current Mac operating system.
The system was originally marketed as simply "version 10" of Mac OS, but it has a history that is largely independent of the classic Mac OS. It is a Unix-based operating system built on NeXTSTEP and other NeXT technology from the late 1980s until early 1997, when Apple purchased the company and its CEO Steve Jobs returned to Apple. Precursors to Mac OS X include OPENSTEP, Apple's Rhapsody project, and the Mac OS X Public Beta.
macOS is based on Apple's open source Darwin operating system, which is based on the XNU kernel and BSD.
macOS is the basis for some of Apple's other operating systems, including iPhone OS/iOS, iPadOS, watchOS, tvOS, and visionOS.
=== Releases ===
==== Desktop ====
The first version of the system was released on March 24, 2001, supporting the Aqua user interface. Since then, several more versions adding newer features and technologies have been released. Since 2011, new releases have been offered annually.
Mac OS X 10.0 – codenamed "Cheetah", released Saturday, March 24, 2001
Mac OS X 10.1 – codenamed "Puma", released Tuesday, September 25, 2001
Mac OS X Jaguar – version 10.2, released Friday, August 23, 2002
Mac OS X Panther – version 10.3, released Friday, October 24, 2003
Mac OS X Tiger – version 10.4, released Friday, April 29, 2005
Mac OS X Leopard – version 10.5, released Friday, October 26, 2007
Mac OS X Snow Leopard – version 10.6, publicly unveiled on Monday, June 8, 2009
Mac OS X Lion – version 10.7, released Wednesday, July 20, 2011
OS X Mountain Lion – version 10.8, released Wednesday, July 25, 2012
OS X Mavericks – version 10.9, released Tuesday, October 22, 2013
OS X Yosemite – version 10.10, released Thursday, October 16, 2014
OS X El Capitan – version 10.11, released Wednesday, September 30, 2015
macOS Sierra – version 10.12, released Tuesday, September 20, 2016
macOS High Sierra – version 10.13, released Monday, September 25, 2017
macOS Mojave – version 10.14, released Monday, September 24, 2018
macOS Catalina – version 10.15, released Monday, October 7, 2019
macOS Big Sur – version 11, released Thursday, November 12, 2020
macOS Monterey – version 12, released Monday, October 25, 2021
macOS Ventura – version 13, released Monday, October 24, 2022
macOS Sonoma – version 14, released Tuesday, September 26, 2023
macOS Sequoia – version 15, released Monday, September 16, 2024
macOS Big Sur's version number was changed from 10.16 to 11.0 in the third developer beta, which was labeled 11.0 Beta 3 rather than 10.16 Beta 3.
==== Server ====
An early server computing version of the system was released in 1999 as a technology preview. It was followed by several more official server-based releases. Server functionality has instead been offered as an add-on for the desktop system since 2011.
Mac OS X Server 1.0 – code named "Hera", released in 1999
Mac OS X Server – later called "OS X Server" and "macOS Server", released between 2001 and 2022.
== Other projects ==
=== Shipped ===
==== A/ROSE ====
The Apple Real-time Operating System Environment (A/ROSE) is a small embedded operating system that runs on the Macintosh Coprocessor Platform, an expansion card for the Macintosh. The platform was deliberately "overdesigned" as a single piece of hardware on which third-party vendors could build practically any product, reducing the otherwise heavy workload of developing a NuBus-based expansion card. The first version of the system was ready for use in February 1988.
==== A/UX ====
In 1988, Apple released its first UNIX-based OS, A/UX, which is a UNIX operating system with the Mac OS look and feel. It was not very competitive for its time, due in part to the crowded UNIX market and Macintosh hardware lacking high-end design features present on workstation-class computers. Most of its sales were to the U.S. government, which required the POSIX compliance that the classic Mac OS lacked.
==== MAE ====
The Macintosh Application Environment (MAE) is a software package introduced by Apple in 1994 that allows certain Unix-based computer workstations to run Macintosh applications. MAE uses the X Window System to emulate a Macintosh Finder-style graphical user interface. The last version, MAE 3.0, is compatible with System 7.5.3. MAE was published for Sun Microsystems SPARCstation and Hewlett-Packard systems. It was discontinued on May 14, 1998.
==== MkLinux ====
Announced at the 1996 Worldwide Developers Conference (WWDC), MkLinux is an open source operating system that was started by the OSF Research Institute and Apple in February 1996 to port Linux to the PowerPC platform, and thus Macintosh computers. In mid 1998, the community-led MkLinux Developers Association took over development of the operating system. MkLinux is short for "Microkernel Linux", which refers to its adaptation of the monolithic Linux kernel to run as a server hosted atop the Mach microkernel version 3.0.
=== Cancelled projects ===
==== Star Trek ====
The Star Trek project (as in "to boldly go where no Mac has gone before") was a secret prototype beginning in 1992, to port the classic Mac OS to Intel-compatible x86 personal computers. In partnership with Apple and with support from Intel, the project was instigated by Novell, which was looking to integrate its DR-DOS with the Mac OS GUI as a mutual response to the monopoly of Microsoft's Windows 3.0 and MS-DOS. A team of four from Apple and four from Novell got the Macintosh Finder, and some basic applications such as QuickTime, running smoothly. The project was canceled one year later in early 1993, but was partially reused when porting the Mac OS to PowerPC.
==== Taligent ====
Taligent (a portmanteau of "talent" and "intelligent") is an object-oriented operating system and the company producing it. Started as the Pink project within Apple to provide a replacement for the classic Mac OS, it was later spun off into a joint venture with IBM as part of the AIM alliance, with the purpose of building a competing platform to Microsoft Cairo and NeXTSTEP. The development process never came together, and the project has been cited as an example of a death march. Apple pulled out of the project in 1995, before the code had been delivered.
==== Copland ====
Copland was a project at Apple to create an updated version of the classic Mac OS. It was to have introduced protected memory, preemptive multitasking, and new underlying operating system features, yet still be compatible with existing Mac software. They originally planned the follow-up release Gershwin to add multithreading and other advanced features. New features were added more rapidly than they could be completed, and the completion date slipped into the future with no sign of a release. In 1996, Apple canceled the project outright and sought a suitable third-party replacement. Copland development ended in August 1996, and in December 1996, Apple announced that it was buying NeXT for its NeXTSTEP operating system.
== Timeline ==
== See also ==
Comparison of operating systems
History of the graphical user interface
Mac
List of Mac software
== References ==
== External links ==
Media related to Macintosh operating systems at Wikimedia Commons | Wikipedia/Macintosh_operating_systems |
Computer reservation systems, or central reservation systems (CRS), are computerized systems used to store and retrieve information and conduct transactions related to air travel, hotels, car rental, or other activities. Originally designed and operated by airlines, CRSs were later extended for use by travel agencies, and global distribution systems (GDSs) to book and sell tickets for multiple airlines. Most airlines have outsourced their CRSs to GDS companies, which also enable consumer access through Internet gateways.
Modern GDSs typically also allow users to book hotel rooms, rental cars, and airline tickets, as well as other activities and tours. They also provide access to railway and bus reservations in some markets, although these are not always integrated with the main system. GDSs are also used to relay computerized information to users in the hotel industry, supporting reservations and helping ensure that hotels are not overbooked.
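The overbooking safeguard mentioned above reduces to an availability check made before each reservation is confirmed. The following is a minimal sketch; the class and field names are invented for illustration and do not come from any real GDS:

```python
# Minimal sketch of overbooking protection for a single hotel's room inventory.
class RoomInventory:
    def __init__(self, total_rooms):
        self.total_rooms = total_rooms
        self.booked = {}  # date -> count of confirmed reservations

    def is_available(self, date):
        return self.booked.get(date, 0) < self.total_rooms

    def reserve(self, date):
        # Refuse the booking rather than oversell the night.
        if not self.is_available(date):
            raise ValueError(f"No rooms left on {date}: hotel would be overbooked")
        self.booked[date] = self.booked.get(date, 0) + 1
        return self.total_rooms - self.booked[date]  # rooms remaining

inv = RoomInventory(total_rooms=2)
inv.reserve("2024-07-01")
inv.reserve("2024-07-01")
# A third reservation for the same night now raises instead of overbooking.
```

A production system would also have to handle cancellations, room types, and concurrent updates from multiple sales channels, which is exactly what centralised inventories exist to coordinate.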
Airline reservations systems may be integrated into a larger passenger service system, which also includes an airline inventory system and a departure control system. The current centralised reservation systems are vulnerable to network-wide system disruptions.
== History ==
=== MARS-1 ===
The MARS-1 train ticket reservation system was designed and planned in the 1950s by the Japanese National Railways' R&D Institute, now the Railway Technical Research Institute, with the system eventually being produced by Hitachi in 1958. It was the world's first seat reservation system for trains. The MARS-1 was capable of reserving seat positions, and was controlled by a transistor computer with a central processing unit and a 400,000-bit magnetic drum memory unit to hold seating files. It used many registers: to indicate whether seats in a train were vacant or reserved (accelerating searches of and updates to seat patterns), to communicate with terminals, to print reservation notices, and to drive CRT displays.
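The idea of a vacancy register can be illustrated with a bit per seat: a single integer then answers "is any seat free?" in one comparison, without scanning the seating file. This sketch is only illustrative of the register concept, not a description of the actual MARS-1 hardware:

```python
# One integer per car acts as a bit register: bit i set means seat i is reserved.
class CarRegister:
    def __init__(self, seats):
        self.seats = seats
        self.bits = 0

    def is_vacant(self, seat):
        return not (self.bits >> seat) & 1

    def reserve(self, seat):
        if not self.is_vacant(seat):
            return False  # already taken
        self.bits |= 1 << seat
        return True

    def has_vacancy(self):
        # A single comparison against the all-reserved pattern answers
        # "any seat free in this car?" without examining seats one by one.
        return self.bits != (1 << self.seats) - 1

car = CarRegister(seats=4)
car.reserve(0)
car.reserve(2)
```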
=== Remote access ===
In 1953 Trans-Canada Airlines (TCA) started investigating a computer-based system with remote terminals, testing one design on the University of Toronto's Ferranti Mark 1 machine that summer. Though successful, the researchers found that input and output was a major problem. Ferranti Canada became involved in the project and suggested a new system using punched cards and a transistorized computer in place of the unreliable tube-based Mark I. The resulting system, ReserVec, started operation in 1962, and took over all booking operations in January 1963. Terminals were placed in all of TCA's ticketing offices, allowing all queries and bookings to complete in about one second with no remote operators needed.
In 1953 American Airlines CEO C. R. Smith chanced to sit next to R. Blair Smith, a senior IBM sales representative, on a flight from Los Angeles to New York. C.R. invited Blair to visit their Reservisor system and look for ways that IBM could improve the system. Blair alerted Thomas Watson Jr. that American was interested in a major collaboration, and a series of low-level studies started. Their idea of an automated airline reservation system (ARS) resulted in a 1959 venture known as the Semi-Automatic Business Research Environment (SABRE), launched the following year. By the time the network was completed in December 1964, it was the largest civil data processing system in the world.
Other airlines established their own systems. Pan Am launched its PANAMAC system in 1964. Delta Air Lines launched the Delta Automated Travel Account System (DATAS) in 1968. United Airlines and Trans World Airlines followed in 1971 with the Apollo Reservation System and Programmed Airline Reservation System (PARS), respectively. Soon, travel agents began pushing for a system that could automate their side of the process by accessing the various ARSes directly to make reservations. Fearful this would place too much power in the hands of agents, American Airlines executive Robert Crandall proposed creating an industry-wide computer reservation system to be a central clearing house for U.S. travel; other airlines demurred, citing fears that United States antitrust law might be breached.
=== Travel agent access ===
In 1976, United Airlines began offering its Apollo system to travel agents; while it would not allow the agents to book tickets on United's competitors, the marketing value of the convenient terminal proved indispensable. SABRE, PARS, and DATAS were soon released to travel agents as well. Following airline deregulation in 1978, an efficient CRS proved particularly important; by some counts, Texas Air executive Frank Lorenzo purchased money-losing Eastern Air Lines specifically to gain control of its SystemOne CRS.
Also in 1976, Videcom International, together with British Airways, British Caledonian and CCL, launched Travicom, the world's first multi-access reservations system (wholly based on Videcom technology). It formed a network providing distribution for initially two, and subsequently 49, subscribing international airlines (including British Airways, British Caledonian, Trans World Airlines, Pan Am, Qantas, Singapore Airlines, Air France, Lufthansa, Scandinavian Airlines System, Air Canada, KLM, Alitalia, Cathay Pacific and Japan Airlines) to thousands of travel agents in the UK. It allowed agents and airlines to communicate via a common distribution language and network, handling 97% of UK airline business trade bookings by 1987. The system went on to be replicated by Videcom in other areas of the world, including the Middle East (DMARS), New Zealand, Kuwait (KMARS), Ireland, the Caribbean, the United States and Hong Kong. Travicom was a trading name for Travel Automation Services Ltd. When British Airways (which by then owned 100% of Travel Automation Services Ltd) chose to participate in the development of the Galileo system, Travicom changed its trading name to Galileo UK, and a migration process was put in place to move agencies from Travicom to Galileo.
European airlines also began to invest in the field in the 1980s initially by deploying their own reservation systems in their homeland, propelled by growth in demand for travel as well as technological advances which allowed GDSes to offer ever-increasing services and searching power. In 1987, a consortium led by Air France and West Germany's Lufthansa developed Amadeus, modeled on SystemOne. Amadeus Global Travel Distribution was launched in 1992. In 1990, Delta, Northwest Airlines, and Trans World Airlines formed Worldspan, and in 1993, another consortium (including British Airways, KLM, and United Airlines, among others) formed the competing company Galileo GDS based on Apollo. Numerous smaller companies such as KIU have also formed, aimed at niche markets not catered for by the four largest networks, including the low-cost carrier segment, and small and medium size domestic and regional airlines.
== Trends ==
At first, airlines' reservation systems preferred their owners' flights to others. By 1987, United States government regulations required SABRE and other American systems to be neutral, with airlines instead selling access to them for profit. European airlines' systems were still skewed toward their owners, but Flight International reported that they would inevitably become neutral as well.
For many years, global distribution systems (GDSs) have had a dominant position in the travel industry. To bypass the GDSs, and avoid high GDS fees, airlines have started to sell flights directly through their websites. Another way to bypass the GDSs is direct connection to travel agencies, such as that of American Airlines.
== Major airline CRS systems ==
== Other systems ==
Polyot-Sirena
== See also ==
== References == | Wikipedia/Reservation_systems |
Workforce management (WFM) is an institutional process that maximizes performance levels and competency for an organization. The process includes all the activities needed to maintain a productive workforce, such as field service management, human resource management, performance and training management, data collection, recruiting, budgeting, forecasting, scheduling and analytics.
Workforce management provides a common set of performance-based tools and software to support corporate management, front-line supervisors, store managers and workers across manufacturing, distribution, transportation, and retail operations. It is sometimes referred to as HRM systems, workforce asset management, or part of ERP systems.
== Definition ==
As workforce management has developed from a traditional approach of staff scheduling to improve time management, it has become more integrated and demand-oriented to optimize the scheduling of staff. Besides the two core aspects of demand-orientation and optimization, workforce management may also incorporate:
forecasting of workload and required staff
involvement of employees into the scheduling process
management of working times and accounts
analysis and monitoring of the entire process.
The starting point is a clear definition of the work required through engineered standards and optimal methods for performing each task as efficiently and safely as possible. Based on this foundation and demand-based forecasts, workers are scheduled, tasks are assigned, performance is measured, feedback is provided and incentives are computed and paid. In addition, online training is provided along with supervisor-based coaching to bring all workers up to required levels of proficiency. Workforce management is a complete approach designed to make the workforce as productive as possible, reduce labor costs, and improve customer service.
=== Field service management ===
Workforce management also uses the process of field service management in order to have oversight of a company's resources that are not used on company property. Examples include:
Demand management – to help forecast work orders to plan the number and expertise of staff that will be needed
Workforce scheduler – using predefined rules to automatically optimise the schedule and use of resources (people, parts, vehicles)
Workforce dispatcher – automatically assigning work orders within predefined zones to particular technicians
Mobile solutions – allowing dispatchers and technicians to communicate in real time.
=== Market growth ===
In the 1980s and 1990s, entrepreneurs focused on topics such as supply chain management, production planning systems or enterprise resource planning. As cost pressures have increased, managers have turned their attention to human resources issues. In all personnel-intensive industries, workforce management has become an important strategic element in corporate management. The process has experienced growth in all sectors, including healthcare. The rise of the gig economy has also gone hand in hand with the rise of workforce management practices.
=== Mobile workforce management ===
As society continues to adopt new technologies such as smartphones and enterprise mobility tools, more companies are allowing employees to become mobile. Mobile workforce management refers to activities used to schedule the employees working outside the company premises. It helps distribute workforce efficiently across various departments in an institution. The need for social distancing imposed by the COVID-19 pandemic has brought about major changes in both employer's and employee's vision of remote work, which will likely have a long-lasting impact on workforce organization and management in the coming years.
== Software ==
Workforce management solutions can be deployed enterprise-wide and through mobile platforms. While special software is commonly used in numerous areas such as ERP (enterprise resource planning), SLM (service lifecycle management), CRM (customer relationship management) and HR (human resources) management, the management of the workforce is often still handled using spreadsheet programs or time recording. This often results in expensive overtime, non-productive idle times, high fluctuation rates, poor customer service and opportunity costs being incurred. By using a software solution for demand-oriented workforce management, planners can optimize staffing by creating schedules that at all times conform to the forecasted requirements. At the same time, a workforce management solution helps users to observe all relevant legislations, local agreements and the contracts of individual employees – including work-life balance guidelines.
A key aspect of workforce management is scheduling. This is achieved by establishing likely demand by analyzing historical data (such as the number and duration of customer contacts, sales figures, check-out transactions or orders to be handled). Many workforce management systems also offer manual adjustment capabilities. The calculated forecast values are then converted into actual staffing requirements by means of an algorithm that is adjusted to the particular use case. The algorithm itself is based on the work of Erlang, though most modern adaptations of workforce management have shifted towards richer state management and optimizations of the original idea.
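The Erlang-based conversion from forecast to staffing can be sketched with the standard Erlang C formula: from the forecast contact volume and average handling time, compute the offered load in erlangs, then search for the smallest agent count meeting a service-level target. The formula is textbook queueing theory; the parameter names below are chosen for the example rather than taken from any particular WFM product:

```python
import math

def erlang_c(agents, load):
    """Erlang C: probability an arriving contact has to wait (load in erlangs)."""
    if agents <= load:
        return 1.0  # queue is unstable: effectively everyone waits
    top = (load ** agents / math.factorial(agents)) * (agents / (agents - load))
    bottom = sum(load ** k / math.factorial(k) for k in range(agents)) + top
    return top / bottom

def agents_needed(calls_per_hour, aht_seconds, target_sl, answer_within_s):
    """Smallest agent count meeting a target like '80% answered within 20 s'."""
    load = calls_per_hour * aht_seconds / 3600.0  # offered load in erlangs
    n = max(1, math.ceil(load))
    while True:
        p_wait = erlang_c(n, load)
        # Service level under exponential service times.
        sl = 1 - p_wait * math.exp(-(n - load) * answer_within_s / aht_seconds)
        if sl >= target_sl:
            return n
        n += 1

# 100 calls/hour at 180 s average handle time is 5 erlangs of offered load.
staff = agents_needed(100, 180, target_sl=0.8, answer_within_s=20)
```

Real schedulers layer shrinkage (breaks, training, absence) and interval-by-interval forecasts on top of this core calculation.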
Current and future staffing requirements, short-term peak loads, availabilities, holidays, budget allowances, skills, labour law-related restrictions, as well as wage and contractual terms have to be integrated into the planning process to guarantee optimal staff deployment. In the workforce management process, the integration of employees is an important factor. In several workforce management systems, employees can log in their availability or planned absences and they can bid for specific shifts so long as they have the necessary skills for the activities planned for these shifts.
=== Delivery ===
The three methods of delivery for contact center technologies are on-premises, hosted, and cloud-based solutions.
An on-premises system is one in which hardware and software must be physically installed, deployed and maintained at the business. All equipment is purchased up front. It is traditionally associated with large enterprises with the budget and the space to acquire the capabilities deemed necessary, and the personnel available to configure and modify systems.
A hosted system relies on an outside service provider. Software is purchased and installed in a data center on either physical or virtual servers that may be owned or leased by the business. Implementation is similar to an on-premises solution, but the cost is typically lower because hardware need not be purchased. However, the business must pay an initial provisioning fee as well as a monthly fee for the rental or usage of the hosting center’s equipment and personnel.
Cloud computing converts such physical resources as processors and storage into Internet resources. By developing applications in a virtual environment, a company’s computing infrastructure is treated as a utility service, and the company pays only for the time and capacity it needs. Cloud eliminates issues such as computing capacity, physical space, bandwidth and storage.
== See also ==
Medical outsourcing
Meeting scheduling tool
Project workforce management
Project management
Strategic service management
Time and attendance
Time tracking software
Workforce optimization
Employee Scheduling Software
Timesheet
== References ==
== Sources ==
AMR Research, 2006: The Human Capital Management Applications Report, 2005–2010: "AMR Research Releases Report Showing Human Capital Management and Customer Management as Fastest-Growing Enterprise Application Segments at 10%".
DMG Consulting: 2009 Contact Centre Workforce Management Market Report: "DMG Consulting: Workforce Management Market Grew 7.4 Percent in 2008".
Workforce Asset Management Book of Knowledge (John Wiley & Sons Publishing, 2013): Disselkamp, Lisa, ed. (2013). Workforce Asset Management Book of Knowledge. doi:10.1002/9781118636442. ISBN 9781118636442.
Portage Communications, LLC: "Erlang Calculations Compared to Simulation Methods for Workforce Management". 20 July 2016. | Wikipedia/Workforce_management |
IT Application Portfolio Management (APM) is a practice that has emerged in mid to large-size information technology (IT) organizations since the mid-1990s. Application Portfolio Management attempts to use the lessons of financial portfolio management to justify and measure the financial benefits of each application in comparison to the costs of the application's maintenance and operations.
== Evolution of the practice ==
Likely the earliest mention of the Applications Portfolio was in Cyrus Gibson and Richard Nolan's HBR article "Managing the Four Stages of EDP Growth" in 1974.
Gibson and Nolan posited that businesses' understanding and successful use of IT "grows" in predictable stages and a given business' progress through the stages can be measured by observing the Applications Portfolio, User Awareness, IT Management Practices, and IT Resources within the context of an analysis of overall IT spending.
Nolan, Norton & Co. pioneered the use of these concepts in practice with studies at DuPont, Deere, Union Carbide, IBM and Merrill Lynch among others. In these "Stage Assessments" they measured the degree to which each application supported or "covered" each business function or process, spending on the application, functional qualities, and technical qualities. These measures provided a comprehensive view of the application of IT to the business, the strengths and weaknesses, and a road map to improvement.
APM was widely adopted in the late 1980s and through the 1990s as organizations began to address the threat of application failure when the date changed to the year 2000 (a threat that became known as Year 2000 or Y2K). During this time, tens of thousands of IT organizations around the world developed a comprehensive list of their applications, with information about each application.
In many organizations, the value of developing this list was challenged by business leaders concerned about the cost of addressing the Y2K risk. In some organizations, the notion of managing the portfolio was presented to the business people in charge of the Information Technology budget as a benefit of performing the work, above and beyond managing the risk of application failure.
There are two main categories of application portfolio management solutions, generally referred to as 'Top Down' and 'Bottom Up' approaches. The first need in any organization is to understand what applications exist and their main characteristics (such as flexibility, maintainability, owner, etc.), typically referred to as the 'Inventory'. Another approach to APM is to gain a detailed understanding of the applications in the portfolio by parsing the application source code and its related components into a repository database (i.e. 'Bottom Up'). Application mining tools, now marketed as APM tools, support this approach.
Hundreds of tools are available to support the 'Top Down' approach. This is not surprising, because the majority of the task is to collect the right information; the actual maintenance and storage of the information can be implemented relatively easily. For that reason, many organizations bypass using commercial tools and use Microsoft Excel to store inventory data. However, if the inventory becomes complex, Excel can become cumbersome to maintain. Automatically updating the data is not well supported by an Excel-based solution. Finally, such an Inventory solution is completely separate from the 'Bottom Up' understanding needs.
== Business case for APM ==
According to Forrester Research, "For IT operating budgets, enterprises spend two-thirds or more on ongoing operations and maintenance."
It is common to find organizations that have multiple systems that perform the same function. Many reasons may exist for this duplication, including the former prominence of departmental computing, the application silos of the 1970s and 1980s, the proliferation of corporate mergers and acquisitions, and abortive attempts to adopt new tools. Regardless of the duplication, each application is separately maintained and periodically upgraded, and the redundancy increases complexity and cost.
With a large majority of expenses going to manage the existing IT applications, the transparency of the current inventory of applications and resource consumption is a primary goal of Application Portfolio Management. This enables firms to: 1) identify and eliminate partially and wholly redundant applications, 2) quantify the condition of applications in terms of stability, quality, and maintainability, 3) quantify the business value/impact of applications and the relative importance of each application to the business, 4) allocate resources according to the applications' condition and importance in the context of business priorities.
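The four goals above amount to scoring each inventoried application on condition and business value and acting on the result. A minimal sketch of such a quadrant analysis is shown below; the field names, thresholds, and category labels are illustrative assumptions, not a standard APM schema.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """One record in a hypothetical APM inventory (field names are illustrative)."""
    name: str
    annual_cost: float     # run/maintain cost per year
    condition: float       # technical health, 0 (poor) .. 1 (excellent)
    business_value: float  # importance to the business, 0 .. 1

def classify(app: Application) -> str:
    """A common APM quadrant analysis: technical condition vs. business value."""
    if app.business_value >= 0.5:
        return "invest" if app.condition >= 0.5 else "modernize"
    return "tolerate" if app.condition >= 0.5 else "retire"

inventory = [
    Application("Invoicing", 120_000, 0.8, 0.9),
    Application("Legacy HR", 200_000, 0.2, 0.7),
    Application("Old Wiki", 15_000, 0.3, 0.1),
]
for app in inventory:
    print(app.name, "->", classify(app))
```

Real APM tools use far richer scoring models, but the same condition-versus-value matrix underlies most of them.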
Transparency also aids strategic planning efforts and diffuses business / IT conflict, because when business leaders understand how applications support their key business functions, and the impact of outages and poor quality, conversations turn away from blaming IT for excessive costs and toward how to best spend precious resources to support corporate priorities.
== Portfolio ==
Taking ideas from investment portfolio management, APM practitioners gather information about each application in use in a business or organization, including the cost to build and maintain the application, the business value produced, the quality of the application, and the expected lifespan. Using this information, the portfolio manager is able to provide detailed reports on the performance of the IT infrastructure in relation to the cost to own and the business value delivered.
== Definition of an application ==
In application portfolio management, the definition of an application is a critical component. Because these definitions often produce contentious results, many service providers help organizations create their own definition.
Application software — An executable software component or tightly coupled set of executable software components (one or more), deployed together, that deliver some or all of a series of steps needed to create, update, manage, calculate or display information for a specific business purpose. In order to be counted, each component must not be a member of another application.
Software component — An executable set of computer instructions contained in a single deployment container in such a way that it cannot be broken apart further. Examples include a Dynamic Link Library, an ASP web page, and a command line "EXE" application. A zip file may contain more than one software component because it is easy to break them down further (by unpacking the ZIP archive).
Software application and software component are technical terms used to describe a specific instance of the class of application software for the purposes of IT portfolio management. See application software for a definition for non-practitioners of IT Management or Enterprise Architecture.
Software application portfolio management requires a fairly detailed and specific definition of an application in order to create a catalog of applications installed in an organization.
== The requirements of a definition for an application ==
The definition of an application has the following needs in the context of application portfolio management:
It must be simple for business team members to explain, understand, and apply.
It must make sense to development, operations, and project management in the IT groups.
It must be useful as an input to a complex function whose output is the overall cost of the portfolio. In other words, there are many factors that lead to the overall cost of an IT portfolio. The sheer number of applications is one of those factors. Therefore, the definition of an application must be useful in that calculation.
It must be useful for the members of the Enterprise Architecture team who are attempting to judge a project with respect to their objectives for portfolio optimization and simplification.
It must clearly define the boundaries of an application so that a person working on a measurable 'portfolio simplification' activity cannot simply redefine the boundaries of two existing applications in such a way as to call them a single application.
Many organizations will readdress the definition of an application within the context of their IT portfolio management and governance practices. For that reason, this definition should be considered as a working start.
== Examples ==
The definition of an application can be difficult to convey clearly. In an IT organization, there might be subtle differences in the definition among teams and even within one IT team. It helps to illustrate the definition by providing examples. The section below offers some examples of things that are applications, things that are not applications, and things that comprise two or more applications.
=== Inclusions ===
By this definition, the following are applications:
A web service endpoint that presents three web services: InvoiceCreate, InvoiceSearch, and InvoiceDetailGet
A service-oriented business application (SOBA) that presents a user interface for creating invoices, and that in turn calls the InvoiceCreate service (note that the service itself is a different application).
A mobile application that is published to an enterprise application store and thus deployed to employee-owned or operated portable devices enabling authenticated access to data and services.
A legacy system composed of a rich client, a server-based middle tier, and a database, all of which are tightly coupled. (e.g. changes in one are very likely to trigger changes in another).
A website publishing system that pulls data from a database and publishes it to an HTML format as a sub-site on a public URL.
A database that presents data to a Microsoft Excel workbook that queries the information for layout and calculations. This is interesting in that the database itself is an application unless the database is already included in another application (like a legacy system).
An Excel spreadsheet that contains a coherent set of reusable macros that deliver business value. The spreadsheet itself constitutes a deployment container for the application (like a TAR or CAB file).
A set of ASP or PHP web pages that work in conjunction with one another to deliver the experience and logic of a web application. It is entirely possible that a sub-site would qualify as a separate application under this definition if the coupling is loose.
A web service end point established for machine-to-machine communication (not for human interaction), but which can be rationally understood to represent one or more useful steps in a business process.
=== Exclusions ===
The following are not applications:
An HTML website.
A database that contains data but is not part of any series of steps to deliver business value using that data.
A web service that is structurally incapable of being part of a set of steps that provides value. For example, a web service that requires incoming data that breaks shared schema.
A standalone batch script that compares the contents of two databases by making calls to each and then sends e-mail to a monitoring alias if data anomalies are noticed. In this case, the batch script is very likely to be tightly coupled with at least one of the two databases, and therefore should be included in the application boundary that contains the database that it is most tightly coupled with.
=== Composites ===
The following each comprise more than one application:
A composite SOA application composed of a set of reusable services and a user interface that leverages those services. There are at least two applications here (the user interface and one or more service components). Each service is not counted as an application.
A legacy client-server app that writes to a database to store data and an Excel spreadsheet that uses macros to read data from the database to present a report. There are TWO apps in this example. The database clearly belongs to the legacy app because it was developed with it, delivered with it, and is tightly coupled to it. This is true even if the legacy system uses the same stored procedures as the Excel spreadsheet.
== Methods and measures for evaluating applications ==
There are many popular financial measures, and even more metrics of different (non-financial or complex) types that are used for evaluating applications or information systems.
=== Return on investment (ROI) ===
Return on Investment is one of the most popular performance measurement and evaluation metrics used in business analysis. ROI analysis (when applied correctly) is a powerful tool for evaluating existing information systems and making informed decisions on software acquisitions and other projects. However, ROI is a metric designed for a specific purpose: evaluating profitability or financial efficiency. It cannot reliably substitute for other financial metrics in providing an overall economic picture of an information solution. Using ROI as the sole or principal metric for decision-making about information systems is rarely productive, and may be appropriate only in a limited number of cases or projects. As a purely financial measure, ROI provides no information about the efficiency or effectiveness of the information systems themselves.
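The standard ROI calculation is net benefit divided by cost; a minimal sketch (the figures are invented for illustration):

```python
def roi(gain: float, cost: float) -> float:
    """Simple ROI: net benefit relative to the investment cost.

    Returns a ratio; multiply by 100 for a percentage.
    """
    if cost == 0:
        raise ValueError("cost must be non-zero")
    return (gain - cost) / cost

# A system costing 400k that yields 500k in quantified benefit:
print(f"ROI = {roi(500_000, 400_000):.0%}")  # ROI = 25%
```

The hard part in practice is not this arithmetic but quantifying "gain" for an information system, which is exactly why ROI alone is insufficient.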
=== Economic value added (EVA) ===
A measure of a company's financial performance based on the residual wealth calculated by deducting cost of capital from its operating profit (adjusted for taxes on a cash basis). (Also referred to as "economic profit".)
Formula: EVA = Net Operating Profit After Taxes (NOPAT) − (Capital × Cost of Capital)
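The formula translates directly into code; the figures below are invented for illustration:

```python
def eva(nopat: float, capital: float, cost_of_capital: float) -> float:
    """Economic Value Added: NOPAT minus a capital charge."""
    return nopat - capital * cost_of_capital

# 2M NOPAT on 15M of invested capital at a 10% cost of capital:
print(eva(2_000_000, 15_000_000, 0.10))  # 500000.0
```

A positive result means the operation earned more than its capital cost; a negative result means value was destroyed even if accounting profit was positive.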
=== Total cost of ownership (TCO) ===
Total Cost of Ownership is a way to calculate what the application will cost over a defined period of time. In a TCO model, costs for hardware, software, and labor are captured and organized into the various application life cycle stages. An in depth TCO model helps management understand the true cost of the application as it attempts to measure build, run/support, and indirect costs. Many large consulting firms have defined strategies for building a complete TCO model.
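A TCO model is, at its core, a sum over life-cycle stages and cost categories. A minimal sketch, with invented stage names and figures:

```python
# Costs per life-cycle stage, each split into hardware/software/labor
# (stages and amounts are illustrative, not a standard TCO taxonomy).
tco_model = {
    "build":    {"hardware": 50_000, "software": 80_000, "labor": 300_000},
    "run":      {"hardware": 20_000, "software": 40_000, "labor": 150_000},
    "indirect": {"hardware": 0,      "software": 0,      "labor": 60_000},
}

def total_cost_of_ownership(model: dict) -> float:
    """Sum every cost category across every life-cycle stage."""
    return sum(cost for stage in model.values() for cost in stage.values())

print(total_cost_of_ownership(tco_model))  # 700000
```

Consulting-firm TCO models add many more categories (training, downtime, end-user support), but they reduce to the same stage-by-category aggregation.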
=== Total economic impact (TEI) ===
TEI was developed by Forrester Research Inc. Forrester claims TEI systematically looks at the potential effects of technology investments across four dimensions: cost — impact on IT; benefits — impact on business; flexibility — future options created by the investment; risk — uncertainty.
=== Business value of IT (ITBV) ===
ITBV program was developed by Intel Corporation in 2002.
The program uses a set of financial measurements of business value that are called Business Value Dials (Indicators). It is a multidimensional program, including a business component, and is relatively easy to implement.
=== Applied information economics (AIE) ===
AIE is a decision analysis method developed by Hubbard Decision Research. AIE claims to be "the first truly scientific and theoretically sound method" that builds on several methods from decision theory and risk analysis including the use of Monte Carlo methods. AIE is not used often because of its complexity.
== References ==
Media player software is a type of application software for playing multimedia computer files like audio and video files. Media players commonly display standard media control icons known from physical devices such as tape recorders and CD players, such as play, pause, fast-forward (⏩), rewind (⏪), and stop buttons. In addition, they generally have progress bars (or "playback bars"), which are sliders to locate the current position in the duration of the media file.
Mainstream operating systems have at least one default media player. For example, Windows comes with Windows Media Player, Microsoft Movies & TV and Groove Music, while macOS comes with QuickTime Player and Music. Linux distributions come with different media players, such as SMPlayer, Amarok, Audacious, Banshee, MPlayer, mpv, Rhythmbox, Totem, VLC media player, and xine. Android comes with YouTube Music for audio and Google Photos for video, and smartphone vendors such as Samsung may bundle custom software.
== Functionality focus ==
The basic feature set of a media player includes a seek bar, a timer showing the current and total playback time, playback controls (play, pause, previous, next, stop), playlists, a "repeat" mode, and a "shuffle" (or "random") mode that plays files in random order.
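The playlist, repeat, and shuffle behavior described above can be sketched in a few lines; this is a toy model, not any real player's implementation, and the track names are invented:

```python
import random

class Playlist:
    """Minimal playlist with shuffle and repeat modes (illustrative only)."""

    def __init__(self, tracks, shuffle=False, repeat=False):
        self.tracks = list(tracks)
        # Shuffle permutes the play order once, rather than the track list itself.
        self.order = list(range(len(self.tracks)))
        if shuffle:
            random.shuffle(self.order)
        self.repeat = repeat
        self.pos = 0

    def next_track(self):
        """Return the next track, wrapping around only in repeat mode."""
        if self.pos >= len(self.order):
            if not self.repeat:
                return None  # end of playlist
            self.pos = 0
        track = self.tracks[self.order[self.pos]]
        self.pos += 1
        return track

pl = Playlist(["a.mp3", "b.mp3", "c.mp3"], repeat=True)
print([pl.next_track() for _ in range(4)])  # ['a.mp3', 'b.mp3', 'c.mp3', 'a.mp3']
```

Permuting an index list rather than the tracks themselves is a common design choice: it lets the player toggle shuffle off and restore the original order.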
Different media players have different goals and feature sets. Video players are a group of media players whose features are geared toward playing digital video. For example, Windows DVD Player exclusively plays DVD-Video discs. Media Player Classic can play individual audio and video files, but many of its features, such as color correction, picture sharpening, zooming, its set of hotkeys, DVB support, and subtitle support, are only useful for video material such as films and cartoons. Audio players, on the other hand, specialize in digital audio. For example, AIMP exclusively plays audio formats. MediaMonkey can play both audio and video formats, but many of its features, including its media library, lyric discovery, music visualization, online radio, audiobook indexing, and tag editing, are geared toward the consumption of audio material, and watching video files with it is cumbersome. General-purpose media players also exist. For example, Windows Media Player has dedicated features for both audio and video material, although it cannot match the combined feature set of Media Player Classic and MediaMonkey.
By default, videos are played with fully visible field of view while filling at least either width or height of the viewport to appear as large as possible. Options to change the video's scaling and aspect ratio may include filling the viewport through either stretching or cropping, and "100% view" where each pixel of the video covers exactly one pixel on the screen.
Zooming into the field of view during playback may be implemented through a slider on any screen or with pinch zoom on touch screens, and moving the field of view may be implemented through scrolling by dragging inside the view port or by moving a rectangle inside a miniature view of the entire field of view that denotes the magnified area.
Media player software may have the ability to adjust appearance and acoustics during playback using effects such as mirroring, rotating, cropping, cloning, adjusting colours, deinterlacing, and equalizing and visualizing audio. Easter eggs may be featured, such as a puzzle game on VLC Media Player.
Still snapshots may be extracted directly from a video frame or captured through a screenshot, the former of which is preferred since it preserves videos' original dimensions (height and width). Video players may show a tooltip bubble previewing footage at the position hovered over with the mouse cursor.
A preview tooltip for the seek bar has been implemented on a few smartphones through a stylus or a self-capacitive touch screen able to detect a hovering finger. Examples include the Samsung Galaxy S4, S5 (finger), Note 2, Note 4 (stylus), and Note 3 (both).
Streaming media players may indicate buffered segments of the media in the seek bar.
=== 3D video players ===
3D video players are used to play 2D video in 3D format. A high-quality three-dimensional video presentation requires that each frame of a motion picture be embedded with information on the depth of objects present in the scene. This process involves shooting the video with special equipment from two distinct perspectives, or modeling and rendering each frame as a collection of objects composed of 3D vertices and textures, much like in any modern video game, to achieve special effects. Tedious and costly, this method is only used in a small fraction of movies produced worldwide, while most movies remain in the form of traditional 2D images. It is, however, possible to give an otherwise two-dimensional picture the appearance of depth. Using a technique known as anaglyph processing, a "flat" picture can be transformed so as to give an illusion of depth when viewed through anaglyph glasses (usually red-cyan). An image viewed through anaglyph glasses appears to have both protruding and deeply embedded objects in it, at the expense of somewhat distorted colors. The method itself is old, dating back to the mid-19th century, but it is only with recent advances in computer technology that it has become possible to apply this kind of transformation to a series of frames in a motion picture reasonably fast or even in real time, i.e. as the video is being played back. Several implementations exist in the form of 3D video players that render conventional 2D video in anaglyph 3D, as well as in the form of 3D video converters that transform video into stereoscopic anaglyph and transcode it for playback with regular software or hardware video players.
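The final combination step of red-cyan anaglyph processing is simple channel selection: the red channel comes from the left-eye view and the green/blue channels from the right-eye view. The sketch below shows only that step on toy pixel data (synthesizing the second view from a 2D source, the hard part, is out of scope here):

```python
def anaglyph(left, right):
    """Combine a stereo pair into a red-cyan anaglyph frame.

    Each image is a row-major list of rows of (R, G, B) tuples. Red is
    taken from the left-eye view, green and blue from the right-eye view,
    matching red-cyan anaglyph glasses.
    """
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

left  = [[(200, 10, 10), (200, 10, 10)]]    # 1x2 left-eye frame
right = [[(10, 150, 180), (10, 150, 180)]]  # 1x2 right-eye frame
print(anaglyph(left, right))  # [[(200, 150, 180), (200, 150, 180)]]
```

The color distortion mentioned above follows directly from this construction: each eye sees only a subset of the original color channels.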
== Examples ==
Well known examples of media player software include Windows Media Player, VLC media player, iTunes, Winamp, Media Player Classic, MediaMonkey, foobar2000, AIMP, MusicBee and JRiver Media Center. Most of these also include music library managers.
Although media players are often multi-media, they can be primarily designed for a specific media. For example, Media Player Classic and VLC media player are video-focused while Winamp and iTunes are music-focused, despite all of them supporting both types of media.
== Home theater PC ==
A home theater PC or media center computer is a convergence device that combines some or all the capabilities of a personal computer with a software application that supports video, photo, audio playback, and sometimes video recording functionality. Although computers with some of these capabilities were available from the late 1980s, the "Home Theater PC" term first appeared in mainstream press in 1996. Since 2007, other types of consumer electronics, including gaming systems and dedicated media devices have crossed over to manage video and music content. The term "media center" also refers to specialized computer programs designed to run on standard personal computers.
== See also ==
Comparison of video player software
Comparison of audio player software
== References ==
The police are a constituted body of people empowered by a state with the aim of enforcing the law and protecting the public order as well as the public itself. This commonly includes ensuring the safety, health, and possessions of citizens, and preventing crime and civil disorder. Their lawful powers encompass arrest and the use of force legitimized by the state via the monopoly on violence. The term is most commonly associated with the police forces of a sovereign state that are authorized to exercise the police power of that state within a defined legal or territorial area of responsibility. Police forces are often defined as being separate from the military and other organizations involved in the defense of the state against foreign aggressors; however, gendarmerie are military units charged with civil policing. Police forces are usually public sector services, funded through taxes.
Law enforcement is only part of policing activity. Policing has included an array of activities in different situations, but the predominant ones are concerned with the preservation of order. In some societies, in the late 18th and early 19th centuries, these developed within the context of maintaining the class system and the protection of private property. Police forces have become ubiquitous and a necessity in complex modern societies. However, their role can sometimes be controversial, as they may be involved to varying degrees in corruption, brutality, and the enforcement of authoritarian rule.
A police force may also be referred to as a police department, police service, constabulary, gendarmerie, crime prevention, protective services, law enforcement agency, civil guard, or civic guard. Members may be referred to as police officers, troopers, sheriffs, constables, rangers, peace officers or civic/civil guards. Ireland differs from other English-speaking countries by using the Irish language terms Garda (singular) and Gardaí (plural), for both the national police force and its members. The word police is the most universal and similar terms can be seen in many non-English speaking countries.
Numerous slang terms exist for the police. Many slang terms for police officers are decades or centuries old with lost etymologies. One of the oldest, cop, has largely lost its slang connotations and become a common colloquial term used both by the public and police officers to refer to their profession.
== Etymology ==
First attested in English in the early 15th century, originally in a range of senses encompassing '(public) policy; state; public order', the word police comes from Middle French police ('public order, administration, government'), in turn from Latin politia, which is the romanization of the Ancient Greek πολιτεία (politeia) 'citizenship, administration, civil polity'. This is derived from πόλις (polis) 'city'.
== History ==
=== Ancient ===
==== China ====
Law enforcement in ancient China was carried out by "prefects" for thousands of years since it developed in both the Chu and Jin kingdoms of the Spring and Autumn period. In Jin, dozens of prefects were spread across the state, each having limited authority and employment period. They were appointed by local magistrates, who reported to higher authorities such as governors, who in turn were appointed by the emperor, and they oversaw the civil administration of their "prefecture", or jurisdiction. Under each prefect were "subprefects" who helped collectively with law enforcement in the area. Some prefects were responsible for handling investigations, much like modern police detectives. Prefects could also be women. Local citizens could report minor judicial offenses against them such as robberies at a local prefectural office. The concept of the "prefecture system" spread to other cultures such as Korea and Japan.
==== Babylonia ====
In Babylonia, law enforcement tasks were initially entrusted to individuals with military backgrounds or imperial magnates during the Old Babylonian period, but eventually, law enforcement was delegated to officers known as paqūdus, who were present in both cities and rural settlements. A paqūdu was responsible for investigating petty crimes and carrying out arrests.
==== Egypt ====
In ancient Egypt evidence of law enforcement exists as far back as the Old Kingdom period. There are records of an office known as "Judge Commandant of the Police" dating to the fourth dynasty. During the fifth dynasty at the end of the Old Kingdom period, warriors armed with wooden sticks were tasked with guarding public places such as markets, temples, and parks, and apprehending criminals. They are known to have made use of trained monkeys, baboons, and dogs in guard duties and catching criminals. After the Old Kingdom collapsed, ushering in the First Intermediate Period, it is thought that the same model applied. During this period, Bedouins were hired to guard the borders and protect trade caravans. During the Middle Kingdom period, a professional police force was created with a specific focus on enforcing the law, as opposed to the previous informal arrangement of using warriors as police. The police force was further reformed during the New Kingdom period. Police officers served as interrogators, prosecutors, and court bailiffs, and were responsible for administering punishments handed down by judges. In addition, there were special units of police officers trained as priests who were responsible for guarding temples and tombs and preventing inappropriate behavior at festivals or improper observation of religious rites during services. Other police units were tasked with guarding caravans, guarding border crossings, protecting royal necropolises, guarding slaves at work or during transport, patrolling the Nile River, and guarding administrative buildings. By the Eighteenth Dynasty of the New Kingdom period, an elite desert-ranger police force called the Medjay was used to protect valuable areas, especially areas of pharaonic interest like capital cities, royal cemeteries, and the borders of Egypt. 
Though they are best known for their protection of the royal palaces and tombs in Thebes and the surrounding areas, the Medjay were used throughout Upper and Lower Egypt. Each regional unit had its own captain. The police forces of ancient Egypt did not guard rural communities, which often took care of their own judicial problems by appealing to village elders, but many of them had a constable to enforce state laws.
==== Greece ====
In ancient Greece, publicly owned slaves were used by magistrates as police. In Athens, the Scythian Archers (the ῥαβδοῦχοι 'rod-bearers'), a group of about 300 Scythian slaves, was used to guard public meetings to keep order and for crowd control, and also assisted with dealing with criminals, handling prisoners, and making arrests. Other duties associated with modern policing, such as investigating crimes, were left to the citizens themselves. Athenian police forces were supervised by the Areopagus. In Sparta, the Ephors were in charge of maintaining public order as judges, and they used Sparta's Hippeis, a 300-member Royal guard of honor, as their enforcers. There were separate authorities supervising women, children, and agricultural issues. Sparta also had a secret police force called the crypteia to watch the large population of helots, or slaves.
==== Rome ====
In the Roman Empire, the army played a major role in providing security. Roman soldiers detached from their legions and posted among civilians carried out law enforcement tasks. The Praetorian Guard, an elite army unit which was primarily an Imperial bodyguard and intelligence-gathering unit, could also act as a riot police force if required. Local watchmen were hired by cities to provide some extra security. Lictors, civil servants whose primary duty was to act as bodyguards to magistrates who held imperium, could carry out arrests and inflict punishments at their magistrate's command. Magistrates such as tresviri capitales, procurators fiscal and quaestors investigated crimes. There was no concept of public prosecution, so victims of crime or their families had to organize and manage the prosecution themselves. Under the reign of Augustus, when the capital had grown to almost one million inhabitants, 14 wards were created; the wards were protected by seven squads of 1,000 men called vigiles, who acted as night watchmen and firemen. In addition to firefighting, their duties included apprehending petty criminals, capturing runaway slaves, guarding the baths at night, and stopping disturbances of the peace. As well as the city of Rome, vigiles were also stationed in the harbor cities of Ostia and Portus. Augustus also formed the Urban Cohorts to deal with gangs and civil disturbances in the city of Rome, and as a counterbalance to the Praetorian Guard's enormous power in the city. They were led by the urban prefect. Urban Cohort units were later formed in Roman Carthage and Lugdunum.
==== India ====
Law enforcement systems existed in the various kingdoms and empires of ancient India. The Apastamba Dharmasutra prescribes that kings should appoint officers and subordinates in the towns and villages to protect their subjects from crime. Various inscriptions and literature from ancient India suggest that a variety of roles existed for law enforcement officials such as those of a constable, thief catcher, watchman, and detective. In ancient India up to medieval and early modern times, kotwals were in charge of local law enforcement.
==== Achaemenid (First Persian) Empire ====
The Achaemenid Empire had well-organized police forces. A police force existed in every place of importance. In the cities, each ward was under the command of a Superintendent of Police, known as a Kuipan. Police officers also acted as prosecutors and carried out punishments imposed by the courts. They were required to know the court procedure for prosecuting cases and advancing accusations.
==== Israel ====
In ancient Israel and Judah, officials with the responsibility of making declarations to the people, guarding the king's person, supervising public works, and executing the orders of the courts existed in the urban areas. They are repeatedly mentioned in the Hebrew Bible, and this system lasted into the period of Roman rule. The first century Jewish historian Josephus related that every judge had two such officers under his command. Levites were preferred for this role. Cities and towns also had night watchmen. Besides officers of the town, there were officers for every tribe. The temple in Jerusalem was protected by a special temple guard. The Talmud mentions various local officials in the Jewish communities of the Land of Israel and Babylon who supervised economic activity. Their Greek-sounding titles suggest that the roles were introduced under Hellenic influence. Most of these officials received their authority from local courts and their salaries were drawn from the town treasury. The Talmud also mentions city watchmen and mounted and armed watchmen in the suburbs.
==== Africa ====
In many regions of pre-colonial Africa, particularly West and Central Africa, guild-like secret societies emerged as law enforcement. In the absence of a court system or written legal code, they carried out police-like activities, employing varying degrees of coercion to enforce conformity and deter antisocial behavior. In ancient Ethiopia, armed retainers of the nobility enforced law in the countryside according to the will of their leaders. The Songhai Empire had officials known as assara-munidios, or "enforcers", acting as police.
==== The Americas ====
Pre-Columbian civilizations in the Americas also had organized law enforcement. The city-states of the Maya civilization had constables known as tupils. In the Aztec Empire, judges had officers serving under them who were empowered to perform arrests, even of dignitaries. In the Inca Empire, officials called kuraka enforced the law among the households they were assigned to oversee, with inspectors known as tokoyrikoq (lit. 'he who sees all') also stationed throughout the provinces to keep order.
=== Post-classical ===
In medieval Spain, Santas Hermandades, or 'holy brotherhoods', peacekeeping associations of armed individuals, were a characteristic of municipal life, especially in Castile. As medieval Spanish kings often could not offer adequate protection, protective municipal leagues began to emerge in the twelfth century against banditry and other rural criminals, and against the lawless nobility or to support one or another claimant to a crown.
These organizations were intended to be temporary, but became a long-standing fixture of Spain. The first recorded case of the formation of an hermandad occurred when the towns and the peasantry of the north united to police the pilgrim road to Santiago de Compostela in Galicia, and protect the pilgrims against robber knights.
Throughout the Middle Ages such alliances were frequently formed by combinations of towns to protect the roads connecting them, and were occasionally extended to political purposes. Among the most powerful was the league of North Castilian and Basque ports, the Hermandad de las marismas: Toledo, Talavera, and Villarreal.
As one of their first acts after the end of the War of the Castilian Succession in 1479, Ferdinand II of Aragon and Isabella I of Castile established the centrally-organized and efficient Holy Brotherhood as a national police force. They adapted an existing brotherhood to the purpose of a general police acting under officials appointed by themselves, and endowed with great powers of summary jurisdiction even in capital cases. The original brotherhoods continued to serve as modest local police-units until their final suppression in 1835.
The Vehmic courts of Germany provided some policing in the absence of strong state institutions. Such courts had a chairman who presided over a session and lay judges who passed judgement and carried out law enforcement tasks. Among the responsibilities that lay judges had were giving formal warnings to known troublemakers, issuing warrants, and carrying out executions.
In the medieval Islamic Caliphates, police were known as Shurta. Bodies termed Shurta existed perhaps as early as the Rashidun Caliphate during the reign of Uthman. The Shurta is known to have existed in the Abbasid and Umayyad Caliphates. Their primary roles were to act as police and internal security forces but they could also be used for other duties such as customs and tax enforcement, rubbish collection, and acting as bodyguards for governors. From the 10th century, the importance of the Shurta declined as the army assumed internal security tasks while cities became more autonomous and handled their own policing needs locally, such as by hiring watchmen. In addition, officials called muhtasibs were responsible for supervising bazaars and economic activity in general in the medieval Islamic world.
In France during the Middle Ages, there were two Great Officers of the Crown of France with police responsibilities: The Marshal of France and the Grand Constable of France. The military policing responsibilities of the Marshal of France were delegated to the Marshal's provost, whose force was known as the Marshalcy because its authority ultimately derived from the Marshal. The marshalcy dates back to the Hundred Years' War, and some historians trace it back to the early 12th century. Another organisation, the Constabulary (Old French: Connétablie), was under the command of the Constable of France. The constabulary was regularised as a military body in 1337. Under Francis I (reigned 1515–1547), the Maréchaussée was merged with the constabulary. The resulting force was also known as the Maréchaussée, or, formally, the Constabulary and Marshalcy of France.
In late medieval Italian cities, police forces were known as berovierri. Individually, their members were known as birri. Subordinate to the city's podestà, the berovierri were responsible for guarding the cities and their suburbs, patrolling, and the pursuit and arrest of criminals. They were typically hired on short-term contracts, usually six months. Detailed records from medieval Bologna show that birri had a chain of command, with constables and sergeants managing lower-ranking birri, that they wore uniforms, that they were housed together with other employees of the podestà together with a number of servants including cooks and stable-keepers, that their parentage and places of origin were meticulously recorded, and that most were not native to Bologna, with many coming from outside Italy.
The English system of maintaining public order was a private system of tithings known as the mutual pledge system, introduced under Alfred the Great. Communities were divided into groups of ten families called tithings, each of which was overseen by a chief tithingman. Every household head was responsible for the good behavior of his own family and the good behavior of other members of his tithing, and every male aged 12 and over was required to participate in a tithing. Members of a tithing were responsible for raising the "hue and cry" upon witnessing or learning of a crime, and the men of the tithing were responsible for capturing the criminal. The person the tithing captured would then be brought before the chief tithingman, who would determine guilt or innocence and punishment; all members of the criminal's tithing would be responsible for paying the fine. A group of ten tithings was known as a "hundred", and every hundred was overseen by an official known as a reeve. Hundreds ensured that if a criminal escaped to a neighboring village, he could be captured and returned to his village; if a criminal was not apprehended, the entire hundred could be fined. The hundreds were grouped into administrative divisions known as shires, the rough equivalent of a modern county, each overseen by an official known as a shire-reeve, from which the term sheriff evolved. The shire-reeve had the power of posse comitatus, meaning he could gather the men of his shire to pursue a criminal. Following the Norman conquest of England in 1066, the tithing system was tightened into the frankpledge system. By the end of the 13th century, the office of constable had developed. Constables had the same responsibilities as chief tithingmen and additionally served as royal officers. The constable was elected by his parish every year, and constables eventually became the first 'police' officials to be tax-supported.
In urban areas, watchmen were tasked with keeping order and enforcing nighttime curfew. Watchmen guarded the town gates at night, patrolled the streets, arrested those on the streets at night without good reason, and also acted as firefighters. Eventually the office of justice of the peace was established, with a justice of the peace overseeing constables. There was also a system of investigative "juries".
The Assize of Arms of 1252, which required the appointment of constables to summon men to arms, quell breaches of the peace, and to deliver offenders to the sheriff or reeve, is cited as one of the earliest antecedents of the English police. The Statute of Winchester of 1285 is also cited as the primary legislation regulating the policing of the country between the Norman Conquest and the Metropolitan Police Act 1829.
From about 1500, private watchmen were funded by private individuals and organisations to carry out police functions. They were later nicknamed 'Charlies', probably after the reigning monarch King Charles II. Thief-takers were also rewarded for catching thieves and returning the stolen property. They were private individuals usually hired by crime victims.
The earliest English use of the word police seems to have been the term Polles mentioned in the book The Second Part of the Institutes of the Lawes of England published in 1642.
=== Early modern ===
The first example of a statutory police force in the world was probably the High Constables of Edinburgh, formed in 1611 to police the streets of Edinburgh, then part of the Kingdom of Scotland. The constables, of whom half were merchants and half were craftsmen, were charged with enforcing 16 regulations relating to curfews, weapons, and theft. At that time, maintenance of public order in Scotland was mainly done by clan chiefs and feudal lords. The first centrally organised and uniformed police force was created by the government of King Louis XIV in 1667 to police the city of Paris, then the largest city in Europe. The royal edict, registered by the Parlement of Paris on March 15, 1667, created the office of lieutenant général de police ("lieutenant general of police"), who was to be the head of the new Paris police force, and defined the task of the police as "ensuring the peace and quiet of the public and of private individuals, purging the city of what may cause disturbances, procuring abundance, and having each and everyone live according to their station and their duties".
This office was first held by Gabriel Nicolas de la Reynie, who had 44 commissaires de police ('police commissioners') under his authority. In 1709, these commissioners were assisted by inspecteurs de police ('police inspectors'). The city of Paris was divided into 16 districts policed by the commissaires, each assigned to a particular district and assisted by a growing bureaucracy. The scheme of the Paris police force was extended to the rest of France by a royal edict of October 1699, resulting in the creation of lieutenants general of police in all large French cities and towns.
After the French Revolution, Napoléon I reorganized the police in Paris and other cities with more than 5,000 inhabitants on February 17, 1800, as the Prefecture of Police. On March 12, 1829, a government decree created the first uniformed police in France, known as sergents de ville ('city sergeants'), which the Paris Prefecture of Police's website claims were the first uniformed policemen in the world.
In feudal Japan, samurai warriors were charged with enforcing the law among commoners. Some samurai acted as magistrates called machi-bugyō, who served as judges, prosecutors, and chiefs of police. Beneath them were other samurai serving as yoriki, or assistant magistrates, who conducted criminal investigations, and beneath them were samurai serving as dōshin, who were responsible for patrolling the streets, keeping the peace, and making arrests when necessary. The yoriki were responsible for managing the dōshin. Yoriki and dōshin were typically drawn from low-ranking samurai families. Assisting the dōshin were the komono, non-samurai chōnin who went on patrol with them and provided assistance; the okappiki, non-samurai from the lowest outcast class, often former criminals, who worked for them as informers and spies; and the gōyokiki or meakashi, chōnin, often former criminals, who were hired by local residents and merchants to work as police assistants in a particular neighborhood. This system typically did not apply to the samurai themselves. Samurai clans were expected to resolve disputes among themselves through negotiation, or, when that failed, through duels. Only rarely did samurai bring their disputes to a magistrate or answer to police.
In Joseon-era Korea, the Podocheong emerged as a police force with the power to arrest and punish criminals. Established in 1469 as a temporary organization, its role solidified into a permanent one.
In Sweden, local governments were responsible for law and order by way of a royal decree issued by Magnus III in the 13th century. The cities financed and organized groups of watchmen who patrolled the streets. In the late 1500s in Stockholm, patrol duties were in large part taken over by a special corps of salaried city guards. The city guard was organized, uniformed and armed like a military unit and was responsible for interventions against various crimes and the arrest of suspected criminals. These guards were assisted by the military, fire patrolmen, and a civilian unit that did not wear a uniform, but instead wore a small badge around the neck. The civilian unit monitored compliance with city ordinances relating to e.g. sanitation issues, traffic and taxes. In rural areas, the King's bailiffs were responsible for law and order until the establishment of counties in the 1630s.
Up to the early 18th century, the level of state involvement in law enforcement in Britain was low. Although some law enforcement officials existed in the form of constables and watchmen, there was no organized police force. A professional police force like the one already present in France was considered ill-suited to Britain, where such forces were seen as a threat to the people's liberty and balanced constitution, favoring arbitrary and tyrannical government. Law enforcement was mostly left to private citizens, who had the right and duty to prosecute crimes whether or not they were personally involved. At the cry of 'murder!' or 'stop thief!' everyone was entitled and obliged to join the pursuit. Once the criminal had been apprehended, the parish constables and night watchmen, who were the only public officials provided by the state and who were typically part-time and local, would make the arrest. To encourage citizens to arrest and prosecute offenders, the state offered rewards. The first such reward, established in 1692, was £40 for the conviction of a highwayman; in the following years it was extended to burglars, coiners, and other offenders. The reward was increased in 1720 when, after the end of the War of the Spanish Succession and the consequent rise in criminal offenses, the government offered £100 for the conviction of a highwayman. Although these rewards were conceived as an incentive for the victims of an offense to proceed to prosecution and bring criminals to justice, the government's efforts also increased the number of private thief-takers.
Thief-takers became infamously known not so much for what they were supposed to do, catching real criminals and prosecuting them, as for "setting themselves up as intermediaries between victims and their attackers, extracting payments for the return of stolen goods and using the threat of prosecution to keep offenders in thrall". Some of them, such as Jonathan Wild, became infamous at the time for staging robberies in order to receive the reward.
In 1737, George II began paying some London and Middlesex watchmen with tax monies, beginning the shift to government control. In 1749, Judge Henry Fielding began organizing a force of quasi-professional constables known as the Bow Street Runners. The Bow Street Runners are considered to have been Britain's first dedicated police force. They represented a formalization and regularization of existing policing methods, similar to the unofficial 'thief-takers'. What made them different was their formal attachment to the Bow Street magistrates' office, and payment by the magistrate with funds from the central government. They worked out of Fielding's office and court at No. 4 Bow Street, and did not patrol but served writs and arrested offenders on the authority of the magistrates, travelling nationwide to apprehend criminals. Fielding wanted to regulate and legalize law enforcement activities due to the high rate of corruption and mistaken or malicious arrests seen with the system that depended mainly on private citizens and state rewards for law enforcement. Henry Fielding's work was carried on by his brother, Justice John Fielding, who succeeded him as magistrate in the Bow Street office. Under John Fielding, the institution of the Bow Street Runners gained more and more recognition from the government, although the force was only funded intermittently in the years that followed. In 1763, the Bow Street Horse Patrol was established to combat highway robbery, funded by a government grant. The Bow Street Runners served as the guiding principle for the way that policing developed over the next 80 years. Bow Street was a manifestation of the move towards increasing professionalisation and state control of street life, beginning in London.
The Macdaniel affair, a 1754 British political scandal in which a group of thief-takers was found to be falsely prosecuting innocent men in order to collect reward money from bounties, added further impetus for a publicly salaried police force that did not depend on rewards. Nonetheless, in 1828, there were privately financed police units in no fewer than 45 parishes within a 10-mile radius of London.
The word police was borrowed from French into the English language in the 18th century, but for a long time it applied only to French and continental European police forces. The word, and the concept of police itself, were "disliked as a symbol of foreign oppression". Before the 19th century, the first use of the word police recorded in government documents in the United Kingdom was the appointment of Commissioners of Police for Scotland in 1714 and the creation of the Marine Police in 1798.
=== Modern ===
==== Scotland and Ireland ====
Following early police forces established in 1779 and 1788 in Glasgow, Scotland, the Glasgow authorities successfully petitioned the government to pass the Glasgow Police Act establishing the City of Glasgow Police in 1800. Other Scottish towns soon followed suit and set up their own police forces through acts of parliament. In Ireland, the Irish Constabulary Act 1822 marked the beginning of the Royal Irish Constabulary. The act established a force in each barony with chief constables and inspectors general under the control of the civil administration at Dublin Castle. By 1841 this force numbered over 8,600 men.
==== London ====
In 1797, Patrick Colquhoun was able to persuade the West Indies merchants who operated at the Pool of London on the River Thames to establish a police force at the docks to prevent rampant theft that was causing annual estimated losses of £500,000 worth of cargo in imports alone. The idea of a police, as it then existed in France, was considered as a potentially undesirable foreign import. In building the case for the police in the face of England's firm anti-police sentiment, Colquhoun framed the political rationale on economic indicators to show that a police dedicated to crime prevention was "perfectly congenial to the principle of the British constitution". Moreover, he went so far as to praise the French system, which had reached "the greatest degree of perfection" in his estimation.
With an initial investment of £4,200, the new force, the Marine Police, began with about 50 men charged with policing 33,000 workers in the river trades, of whom Colquhoun claimed 11,000 were known criminals and "on the game". The force was partly funded by the London Society of West India Planters and Merchants. It proved a success in its first year, during which its men "established their worth by saving £122,000 worth of cargo and by the rescuing of several lives". Word of this success spread quickly, and on 28 July 1800 the government passed the Depredations on the Thames Act 1800, establishing a fully funded police force, the Thames River Police, together with new laws including police powers; it is now the oldest police force in the world. Colquhoun published a book on the experiment, The Commerce and Policing of the River Thames. It found receptive audiences far outside London, and inspired similar forces in other cities, notably New York City, Dublin, and Sydney.
Colquhoun's utilitarian approach to the problem – using a cost-benefit argument to obtain support from businesses standing to benefit – allowed him to achieve what Henry and John Fielding had failed to achieve for their Bow Street detectives. Unlike the stipendiary system at Bow Street, the river police were full-time, salaried officers prohibited from taking private fees. His other contribution was the concept of preventive policing; his police were to act as a highly visible deterrent to crime by their permanent presence on the Thames.
==== Metropolitan ====
London was fast reaching a size unprecedented in world history, due to the onset of the Industrial Revolution. It became clear that the locally maintained system of volunteer constables and "watchmen" was ineffective at both detecting and preventing crime. A parliamentary committee was appointed to investigate the system of policing in London. After being appointed Home Secretary in 1822, Sir Robert Peel established a second and more effective committee, and acted upon its findings.
Royal assent to the Metropolitan Police Act 1829 was given and the Metropolitan Police Service was established on September 29, 1829, in London. Peel, widely regarded as the father of modern policing, was heavily influenced by the social and legal philosophy of Jeremy Bentham, who called for a strong and centralised, but politically neutral, police force for the maintenance of social order, for the protection of people from crime and to act as a visible deterrent to urban crime and disorder. Peel decided to standardise the police force as an official paid profession, to organise it in a civilian fashion, and to make it answerable to the public.
Due to public fears concerning the deployment of the military in domestic matters, Peel organised the force along civilian lines, rather than paramilitary. To appear neutral, the uniform was deliberately manufactured in blue, rather than red which was then a military colour, along with the officers being armed only with a wooden truncheon and a rattle to signal the need for assistance. Along with this, police ranks did not include military titles, with the exception of Sergeant.
To distance the new police force from the initial public view of it as a new tool of government repression, Peel publicised the so-called Peelian principles, which set down basic guidelines for ethical policing:
Whether the police are effective is measured not by the number of arrests but by the deterrence of crime.
Above all else, an effective authority figure knows trust and accountability are paramount. Hence, Peel's most often quoted principle that "The police are the public and the public are the police."
The Metropolitan Police Act 1829 created a modern police force by limiting the purview of the force and its powers, and by envisioning it as merely an organ of the judicial system. Their job was apolitical: to maintain the peace and apprehend criminals for the courts to process according to the law. This was very different from the "continental model" of the police force that had been developed in France, where the police force worked within the parameters of the absolutist state as an extension of the authority of the monarch and functioned as part of the governing state.
In 1863, the Metropolitan Police were issued with the distinctive custodian helmet, and in 1884 they switched to the use of whistles that could be heard from much further away. The Metropolitan Police became a model for the police forces in many countries, including the United States and most of the British Empire. Bobbies can still be found in many parts of the Commonwealth of Nations.
==== Australia ====
In Australia, organized law enforcement emerged soon after British colonization began in 1788. The first law enforcement organizations were the Night Watch and Row Boat Guard, which were formed in 1789 to police Sydney. Their ranks were drawn from well-behaved convicts deported to Australia. The Night Watch was replaced by the Sydney Foot Police in 1790. In New South Wales, rural law enforcement officials were appointed by local justices of the peace during the early to mid-19th century and were referred to as "bench police" or "benchers". A mounted police force was formed in 1825.
The first police force having centralised command as well as jurisdiction over an entire colony was the South Australia Police, formed in 1838 under Henry Inman. The New South Wales Police Force, established in 1862, was made up of a large number of policing and military units operating within the then Colony of New South Wales, and traces its links back to the Royal Marines. The Police Regulation Act of 1862 tightly regulated and centralised all of the police forces operating throughout the Colony of New South Wales.
Each Australian state and territory maintains its own police force, while the Australian Federal Police enforces laws at the federal level. The New South Wales Police Force remains the largest police force in Australia in terms of personnel and physical resources. It is also the only police force that requires its recruits to undertake university studies at the recruit level, at their own expense.
==== Brazil ====
In 1566, the first police investigator of Rio de Janeiro was recruited. By the 17th century, most captaincies already had local units with law enforcement functions. On July 9, 1775, a Cavalry Regiment was created in the state of Minas Gerais to maintain law and order. In 1808, the Portuguese royal family relocated to Brazil because of the French invasion of Portugal. King João VI established the Intendência Geral de Polícia ('General Police Intendancy') for investigations, and also created a Royal Police Guard for Rio de Janeiro in 1809. In 1831, after independence, each province started organizing its local "military police" with order maintenance tasks. The Federal Railroad Police was created in 1852, the Federal Highway Police in 1928, and the Federal Police in 1967.
==== Canada ====
During the early days of English and French colonization, municipalities hired watchmen and constables to provide security. Established in 1729, the Royal Newfoundland Constabulary (RNC) was the first policing service founded in Canada. The establishment of modern policing services in the Canadas occurred during the 1830s, modelling their services after the London Metropolitan Police, and adopting the ideas of the Peelian principles. The Toronto Police Service was established in 1834 as the first municipal police service in Canada. Prior to that, local able-bodied male citizens had been required to report for night watch duty as special constables for a fixed number of nights a year on penalty of a fine or imprisonment in a system known as "watch and ward." The Quebec City Police Service was established in 1840.
A national police service, the Dominion Police, was founded in 1868. Initially the Dominion Police provided security for parliament, but its responsibilities quickly grew. In 1870, Rupert's Land and the North-Western Territory were incorporated into the country. In an effort to police its newly acquired territory, the Canadian government established the North-West Mounted Police in 1873 (renamed the Royal North-West Mounted Police in 1904). In 1920, the Dominion Police and the Royal North-West Mounted Police were amalgamated into the Royal Canadian Mounted Police (RCMP).
The RCMP provides federal law enforcement, as well as law enforcement in eight provinces and all three territories. The provinces of Ontario and Quebec maintain their own provincial police forces, the Ontario Provincial Police (OPP) and the Sûreté du Québec (SQ). Policing in Newfoundland and Labrador is provided by the RCMP and the RNC. The aforementioned services also provide municipal policing, although larger Canadian municipalities may establish their own police service.
==== Lebanon ====
In Lebanon, the current police force was established in 1861, with creation of the Gendarmerie.
==== India ====
Under the Mughal Empire, provincial governors called subahdars (or nazims), as well as officials known as faujdars and thanadars, were tasked with keeping law and order. Kotwals were responsible for public order in urban areas. In addition, officials called amils, whose primary duties were tax collection, occasionally dealt with rebels. The system evolved under growing British influence that eventually culminated in the establishment of the British Raj. In 1770, the offices of faujdar and amil were abolished; they were brought back in 1774 by Warren Hastings, the first Governor of the Presidency of Fort William (Bengal). In 1791, the first permanent police force was established by Charles Cornwallis, the Commander-in-Chief of British India and Governor of the Presidency of Fort William.
A single police force was established after the formation of the British Raj with the Government of India Act 1858. A uniform police bureaucracy was formed under the Police Act 1861, which established the Superior Police Services. This later evolved into the Indian Imperial Police, which kept order until the Partition of India and independence in 1947. In 1948, the Indian Imperial Police was replaced by the Indian Police Service.
In modern India, the police are under the control of the respective states and union territories, organized under the State Police Services (SPS). Candidates selected for the SPS are usually posted as Deputy Superintendent of Police or Assistant Commissioner of Police once their probationary period ends. After prescribed satisfactory service in the SPS, officers are nominated to the Indian Police Service. The service color is usually dark blue and red, while the uniform color is khaki.
==== United States ====
In Colonial America, the county sheriff was the most important law enforcement official. For instance, the New York Sheriff's Office was founded in 1626, and the Albany County Sheriff's Department in the 1660s. The county sheriff, who was an elected official, was responsible for enforcing laws, collecting taxes, supervising elections, and handling the legal business of the county government. Sheriffs would investigate crimes and make arrests after citizens filed complaints or provided information about a crime but did not carry out patrols or otherwise take preventive action. Villages and cities typically hired constables and marshals, who were empowered to make arrests and serve warrants. Many municipalities also formed a night watch, a group of citizen volunteers who would patrol the streets at night looking for crime and fires. Typically, constables and marshals were the main law enforcement officials available during the day while the night watch would serve during the night. Eventually, municipalities formed day watch groups. Rioting was handled by local militias.
In the 1700s, the Province of Carolina (later North and South Carolina) established slave patrols in order to prevent slave rebellions and enslaved people from escaping. By 1785 the Charleston Guard and Watch had "a distinct chain of command, uniforms, sole responsibility for policing, salary, authorized use of force, and a focus on preventing crime."
In 1751 moves towards a municipal police service in Philadelphia were made when the city's night watchmen and constables began receiving wages and a Board of Wardens was created to oversee the night watch.
In 1789 the United States Marshals Service was established, followed by other federal services such as the U.S. Parks Police (1791) and U.S. Mint Police (1792). Municipal police services were created in Richmond, Virginia in 1807, Boston in 1838, and New York City in 1845. The United States Secret Service was founded in 1865 and was for some time the main investigative body for the federal government.
Modern policing, influenced by the British model established in 1829 on the basis of the Peelian principles, began emerging in the United States in the mid-19th century, replacing previous law enforcement systems based primarily on night watch organizations. Cities began establishing organized, publicly funded, full-time professional police services. In Boston, a day police consisting of six officers under the command of the city marshal was established in 1838 to supplement the city's night watch. This paved the way for the establishment of the Boston Police Department in 1854. In New York City, law enforcement up to the 1840s was handled by a night watch as well as city marshals, municipal police officers, and constables. In 1845, the New York City Police Department was established. In Philadelphia, the first police officers to patrol the city in daytime were employed in 1833 as a supplement to the night watch system, leading to the establishment of the Philadelphia Police Department in 1854.
In the American Old West, law enforcement was carried out by local sheriffs, rangers, constables, and federal marshals. There were also town marshals responsible for serving civil and criminal warrants, maintaining the jails, and carrying out arrests for petty crime.
In addition to federal, state, and local forces, some special districts have been formed to provide extra police protection in designated areas. These districts may be known as neighborhood improvement districts, crime prevention districts, or security districts.
In 2022, San Francisco supervisors approved a policy allowing municipal police (San Francisco Police Department) to use robots for various law enforcement and emergency operations, permitting their employment as a deadly force option in cases where the "risk of life to members of the public or officers is imminent and outweighs any other force option available to SFPD." This policy has been criticized by groups such as the Electronic Frontier Foundation and the ACLU, who have argued that "killer robots will not make San Francisco better" and "police might even bring armed robots to a protest."
== Development of theory ==
Michel Foucault wrote that the contemporary concept of police as a paid and funded functionary of the state was developed by German and French legal scholars and practitioners in public administration and statistics in the 17th and early 18th centuries, most notably with Nicolas Delamare's Traité de la Police ("Treatise on the Police"), first published in 1705. The German Polizeiwissenschaft (Science of Police) was first theorized by Philipp von Hörnigk, a 17th-century Austrian political economist and civil servant, and much more famously by Johann Heinrich Gottlob Justi, who produced an important theoretical work, known as Cameral science, on the formulation of police. Foucault cites Magdalene Humpert, author of Bibliographie der Kameralwissenschaften (1937), who notes that a substantial bibliography of over 4,000 works on the practice of Polizeiwissenschaft was produced. However, this may be a mistranslation of Foucault's own work, since Humpert's actual source states that over 14,000 items were produced between 1520 and 1850.
As conceptualized by the Polizeiwissenschaft, according to Foucault, the police had an administrative, economic and social duty ("procuring abundance"). It was in charge of demographic concerns and needed to be incorporated within the western political philosophy system of raison d'état, thereby giving the superficial appearance of empowering the population (while unwittingly supervising it), which, according to mercantilist theory, was to be the main strength of the state. Thus, its functions extended well beyond simple law enforcement and included public health concerns, urban planning (which was important because of the miasma theory of disease; thus, cemeteries were moved out of town, etc.), and surveillance of prices.
The concept of preventive policing, or policing to deter crime from taking place, gained influence in the late 18th century. Police Magistrate John Fielding, head of the Bow Street Runners, argued that "...it is much better to prevent even one man from being a rogue than apprehending and bringing forty to justice."
The Utilitarian philosopher Jeremy Bentham promoted the views of the Italian Marquis Cesare Beccaria, and disseminated a translated version of his Essay on Crimes and Punishments. Bentham espoused the guiding principle of "the greatest good for the greatest number":
It is better to prevent crimes than to punish them. This is the chief aim of every good system of legislation, which is the art of leading men to the greatest possible happiness or to the least possible misery, according to calculation of all the goods and evils of life.
Patrick Colquhoun's influential work, A Treatise on the Police of the Metropolis (1797) was heavily influenced by Benthamite thought. Colquhoun's Thames River Police was founded on these principles, and in contrast to the Bow Street Runners, acted as a deterrent by their continual presence on the riverfront, in addition to being able to intervene if they spotted a crime in progress.
Edwin Chadwick's 1829 article, "Preventive police" in the London Review, argued that prevention ought to be the primary concern of a police body, which was not the case in practice. The reason, argued Chadwick, was that "A preventive police would act more immediately by placing difficulties in obtaining the objects of temptation." In contrast to a deterrent of punishment, a preventive police force would deter criminality by making crime cost-ineffective – "crime doesn't pay". In the second draft of his 1829 Police Act, Robert Peel changed the "object" of the new Metropolitan Police to the "principal object", which was the "prevention of crime." Later historians would attribute the perception of England's "appearance of orderliness and love of public order" to the preventive principle entrenched in Peel's police system.
Development of modern police forces around the world was contemporary to the formation of the state, later defined by sociologist Max Weber as achieving a "monopoly on the legitimate use of physical force" and which was primarily exercised by the police and the military. Marxist theory situates the development of the modern state as part of the rise of capitalism, in which the police are one component of the bourgeoisie's repressive apparatus for subjugating the working class. By contrast, the Peelian principles argue that "the power of the police ... is dependent on public approval of their existence, actions and behavior", a philosophy known as policing by consent.
== Personnel and organization ==
Police forces include both preventive (uniformed) police and detectives. Terminology varies from country to country. Police functions include protecting life and property, enforcing criminal law, criminal investigations, regulating traffic, crowd control, public safety duties, civil defense, emergency management, searching for missing persons, lost property and other duties concerned with public order. Regardless of size, police forces are generally organized as a hierarchy with multiple ranks. The exact structures and the names of rank vary considerably by country.
=== Uniformed ===
The police who wear uniforms make up the majority of a police service's personnel. Their main duty is to respond to calls for service. When not responding to these calls, they do work aimed at preventing crime, such as patrols. The uniformed police are known by varying names such as preventive police, the uniform branch/division, administrative police, order police, the patrol bureau/division, or patrol. In Australia and the United Kingdom, patrol personnel are also known as "general duties" officers. Atypically, Brazil's preventive police are known as Military Police.
As the name implies, uniformed police wear uniforms. They perform functions that require an immediate recognition of an officer's legal authority and a potential need for force. Most commonly this means intervening to stop a crime in progress and securing the scene of a crime that has already happened. Besides dealing with crime, these officers may also manage and monitor traffic, carry out community policing duties, maintain order at public events or carry out searches for missing people (in 2012, the latter accounted for 14% of police time in the United Kingdom). As most of these duties must be available as a 24/7 service, uniformed police are required to do shift work.
=== Detectives ===
Police detectives are responsible for investigations and detective work. Detectives may be called Investigations Police, Judiciary/Judicial Police, or Criminal Police. In the United Kingdom, they are often referred to by the name of their department, the Criminal Investigation Department. Detectives typically make up roughly 15–25% of a police service's personnel.
Detectives, in contrast to uniformed police, typically wear business-styled attire in bureaucratic and investigative functions, where a uniformed presence would be either a distraction or intimidating but a need to establish police authority still exists. "Plainclothes" officers dress in attire consistent with that worn by the general public for purposes of blending in.
In some cases, police are assigned to work "undercover", where they conceal their police identity to investigate crimes, such as organized crime or narcotics crime, that are unsolvable by other means. In some cases, this type of policing shares aspects with espionage.
The relationship between detective and uniformed branches varies by country. In the United States, there is high variation within the country itself. Many American police departments require detectives to spend some time on temporary assignments in the patrol division. The argument is that rotating officers helps detectives better understand the uniformed officers' work, promotes cross-training in a wider variety of skills, and prevents "cliques" that can contribute to corruption or other unethical behavior. Conversely, some countries regard detective work as an entirely separate profession, with detectives working in separate agencies and recruited without having to serve in uniform. A common compromise in English-speaking countries is that most detectives are recruited from the uniformed branch, but once qualified they tend to spend the rest of their careers in the detective branch.
Another point of variation is whether detectives have extra status. In some forces, such as the New York Police Department and Philadelphia Police Department, a regular detective holds a higher rank than a regular police officer. In others, such as British police and Canadian police, a regular detective has equal status with regular uniformed officers. Officers still have to take exams to move to the detective branch, but the move is regarded as being a specialization, rather than a promotion.
=== Volunteers and auxiliary ===
Police services often include part-time or volunteer officers, some of whom have other jobs outside policing. These may be paid positions or entirely volunteer. These are known by a variety of names, such as reserves, auxiliary police or special constables.
Other volunteer organizations work with the police and perform some of their duties. Groups in the U.S. including the Retired and Senior Volunteer Program, Community Emergency Response Team, and the Boy Scouts Police Explorers provide training, traffic and crowd control, disaster response, and other policing duties. In the U.S., the Volunteers in Police Service program assists over 200,000 volunteers in almost 2,000 programs. Volunteers may also work on the support staff. Examples of these schemes are Volunteers in Police Service in the US, Police Support Volunteers in the UK and Volunteers in Policing in New South Wales.
=== Specialized ===
Specialized preventive and detective groups, or Specialist Investigation Departments, exist within many law enforcement organizations either for dealing with particular types of crime, such as traffic law enforcement, K9/use of police dogs, crash investigation, homicide, or fraud; or for situations requiring specialized skills, such as underwater search, aviation, explosive disposal ("bomb squad"), and computer crime.
Most larger jurisdictions employ police tactical units, specially selected and trained paramilitary units with specialized equipment, weapons, and training, for the purposes of dealing with particularly violent situations beyond the capability of a patrol officer response, including standoffs, counterterrorism, and rescue operations.
In counterinsurgency-type campaigns, select and specially trained units of police armed and equipped as light infantry have been designated as police field forces, performing paramilitary-type patrols and ambushes in highly dangerous areas while retaining their police powers.
Because their situational mandate typically focuses on removing innocent bystanders from dangerous people and dangerous situations, not violent resolution, they are often equipped with non-lethal tactical tools like chemical agents, stun grenades, and rubber bullets. The Specialist Firearms Command (MO19) of the Metropolitan Police in London is a group of armed police used in dangerous situations including hostage taking, armed robbery/assault and terrorism.
=== Administrative duties ===
Police may have administrative duties that are not directly related to enforcing the law, such as issuing firearms licenses. The extent to which police have these functions varies among countries, with police in France, Germany, and other continental European countries handling such tasks to a greater extent than their British counterparts.
=== Military ===
Military police may refer to:
a section of the military solely responsible for policing the armed forces, referred to as provosts (e.g., United States Air Force Security Forces)
a section of the military responsible for policing in both the armed forces and in the civilian population (e.g., most gendarmeries, such as the French Gendarmerie, the Italian Carabinieri, the Spanish Guardia Civil, and the Portuguese National Republican Guard)
a section of the military solely responsible for policing the civilian population (e.g., Romanian Gendarmerie)
the civilian preventive police of a Brazilian state (e.g., Polícia Militar)
a special military law enforcement service (e.g., Russian Military Police)
=== Religious ===
Some jurisdictions with religious laws may have dedicated religious police to enforce said laws. These religious police forces, which may operate either as a unit of a wider police force or as an independent agency, may only have jurisdiction over members of said religion, or they may have the ability to enforce religious customs nationwide regardless of individual religious beliefs.
Religious police may enforce social norms, gender roles, dress codes, and dietary laws per religious doctrine and laws, and may also prohibit practices that run contrary to said doctrine, such as atheism, proselytism, homosexuality, socialization between different genders, business operations during religious periods or events such as salah or the Sabbath, or the sale and possession of "offending material" ranging from pornography to foreign media.
Forms of religious law enforcement were relatively common in historical religious civilizations, but eventually declined in favor of religious tolerance and pluralism. One of the most common forms of religious police in the modern world are Islamic religious police, which enforce the application of Sharia (Islamic religious law). As of 2018, there are eight Islamic countries that maintain Islamic religious police: Afghanistan, Iran, Iraq, Mauritania, Pakistan, Saudi Arabia, Sudan, and Yemen.
Some forms of religious police may not enforce religious law, but rather suppress religion or religious extremism. This is often done for ideological reasons; for example, communist states such as China and Vietnam have historically suppressed and tightly controlled religions such as Christianity.
=== Secret ===
Secret police organizations are typically used to suppress dissidents for engaging in non-politically correct communications and activities, which are deemed counter-productive to what the state and related establishment promote. Secret police interventions to stop such activities are often illegal, and are designed to debilitate, in various ways, the people targeted in order to limit or stop outright their ability to act in a non-politically correct manner. The methods employed may involve spying, various acts of deception, intimidation, framing, false imprisonment, false incarceration under mental health legislation, and physical violence. Countries widely reported to use secret police organizations include China (The Ministry of State Security) and North Korea (The Ministry of State Security).
== By country ==
Police forces are usually organized and funded by some level of government. The level of government responsible for policing varies from place to place, and may be at the national, regional or local level. Some countries have police forces that serve the same territory, with their jurisdiction depending on the type of crime or other circumstances. Other countries, such as Austria, Chile, Israel, New Zealand, the Philippines, South Africa and Sweden, have a single national police force.
In some places with multiple national police forces, one common arrangement is to have a civilian police force and a paramilitary gendarmerie, such as the Police Nationale and National Gendarmerie in France. The French policing system spread to other countries through the Napoleonic Wars and the French colonial empire. Another example is the Policía Nacional and Guardia Civil in Spain. In both France and Spain, the civilian force polices urban areas and the paramilitary force polices rural areas. Italy has a similar arrangement with the Polizia di Stato and Carabinieri, though their jurisdictions overlap more. Some countries have separate agencies for uniformed police and detectives, such as the Military Police and Civil Police in Brazil and the Carabineros and Investigations Police in Chile.
Other countries have sub-national police forces, but for the most part their jurisdictions do not overlap. In many countries, especially federations, there may be two or more tiers of police force, each serving different levels of government and enforcing different subsets of the law. In Australia and Germany, the majority of policing is carried out by state (i.e. provincial) police forces, which are supplemented by a federal police force. Though not a federation, the United Kingdom has a similar arrangement, where policing is primarily the responsibility of a regional police force and specialist units exist at the national level. In Canada, the Royal Canadian Mounted Police (RCMP) are the federal police, while municipalities can decide whether to run a local police service or to contract local policing duties to a larger one. Most urban areas have a local police service, while most rural areas contract it to the RCMP, or to the provincial police in Ontario and Quebec.
The United States has a highly decentralized and fragmented system of law enforcement, with over 17,000 state and local law enforcement agencies. These agencies include local police, county law enforcement (often in the form of a sheriff's office, or county police), state police and federal law enforcement agencies. Federal agencies, such as the FBI, only have jurisdiction over federal crimes or those that involve more than one state. Other federal agencies have jurisdiction over a specific type of crime. Examples include the Federal Protective Service, which patrols and protects government buildings; the Postal Inspection Service, which protects United States Postal Service facilities, vehicles and items; the Park Police, which protects national parks; and the Amtrak Police, which patrols Amtrak stations and trains. There are also some government agencies and uniformed services that perform police functions in addition to other duties, such as the Coast Guard.
== International ==
Most countries are members of the International Criminal Police Organization (Interpol), established to detect and fight transnational crime and provide for international co-operation and co-ordination of other police activities, such as notifying relatives of the death of foreign nationals. Interpol does not conduct investigations or arrests by itself, but only serves as a central point for information on crime, suspects and criminals. Political crimes are excluded from its competencies.
The terms international policing, transnational policing, and/or global policing began to be used from the early 1990s onwards to describe forms of policing that transcended the boundaries of the sovereign nation-state. These terms refer in variable ways to practices and forms for policing that, in some sense, transcend national borders. This includes a variety of practices, but international police cooperation, criminal intelligence exchange between police agencies working in different nation-states, and police development-aid to weak, failed or failing states are the three types that have received the most scholarly attention.
Historical studies reveal that policing agents have undertaken a variety of cross-border police missions for many years. For example, in the 19th century a number of European policing agencies undertook cross-border surveillance because of concerns about anarchist agitators and other political radicals. A notable example of this was the occasional surveillance by Prussian police of Karl Marx during the years he remained resident in London. The interests of public police agencies in cross-border co-operation in the control of political radicalism and ordinary crime were primarily initiated in Europe, which eventually led to the establishment of Interpol before World War II. There are also many examples of cross-border policing under private auspices and by municipal police forces that date back to the 19th century. It has been established that modern policing has crossed national boundaries from time to time almost from its inception. It is also generally agreed that in the post–Cold War era this type of practice became more significant and frequent.
Few empirical works on the practices of inter/transnational information and intelligence sharing have been undertaken. A notable exception is James Sheptycki's study of police cooperation in the English Channel region, which provides a systematic content analysis of information exchange files and a description of how these transnational information and intelligence exchanges are transformed into police casework. The study showed that transnational police information sharing was routinized in the cross-Channel region from 1968 on the basis of agreements directly between the police agencies and without any formal agreement between the countries concerned. By 1992, with the signing of the Schengen Treaty, which formalized aspects of police information exchange across the territory of the European Union, there were worries that much, if not all, of this intelligence sharing was opaque, raising questions about the efficacy of the accountability mechanisms governing police information sharing in Europe.
Studies of this kind outside of Europe are even rarer, so it is difficult to make generalizations, but one small-scale study that compared transnational police information and intelligence sharing practices at specific cross-border locations in North America and Europe confirmed that the low visibility of police information and intelligence sharing was a common feature. Intelligence-led policing is now common practice in most advanced countries and it is likely that police intelligence sharing and information exchange have a common morphology around the world. James Sheptycki has analyzed the effects of the new information technologies on the organization of policing-intelligence and suggests that a number of "organizational pathologies" have arisen that make the functioning of security-intelligence processes in transnational policing deeply problematic. He argues that transnational police information circuits help to "compose the panic scenes of the security-control society". The paradoxical effect is that the harder policing agencies work to produce security, the greater the feelings of insecurity.
Police development-aid to weak, failed or failing states is another form of transnational policing that has garnered attention. This form of transnational policing plays an increasingly important role in United Nations peacekeeping and this looks set to grow in the years ahead, especially as the international community seeks to develop the rule of law and reform security institutions in states recovering from conflict. With transnational police development-aid the imbalances of power between donors and recipients are stark and there are questions about the applicability and transportability of policing models between jurisdictions.
One topic concerns making transnational policing institutions democratically accountable. According to the Global Accountability Report for 2007, Interpol had the lowest scores in its category (IGOs), coming in tenth with a score of 22% on overall accountability capabilities.
=== Overseas policing ===
A police force may establish its presence in a foreign country with or without the permission of the host state. In the case of China and the ruling Communist Party, this has involved setting up unofficial police service stations around the world, and using coercive means to influence the behaviour of members of the Chinese diaspora, especially those who hold Chinese citizenship. Political dissidents have been harassed and intimidated in a form of transnational repression and pressured to return to China. Many of these actions were illegal in the states where they occurred. Such police stations have been established in dozens of countries around the world, with some, such as the UK and the US, forcing them to close.
== Equipment ==
=== Weapons ===
In many jurisdictions, police officers carry firearms, primarily handguns, in the normal course of their duties. In the United Kingdom (except Northern Ireland), Iceland, Ireland, New Zealand, Norway, and Malta, with the exception of specialist units, officers do not carry firearms as a matter of course. New Zealand and Norwegian police carry firearms in their vehicles, but not on their duty belts, and must obtain authorization before the weapons can be removed from the vehicle unless their own lives or the lives of others are in danger.
Police often have specialized units for handling armed offenders or dangerous situations where combat is likely, such as police tactical units or authorised firearms officers. In some jurisdictions, depending on the circumstances, police can call on the military for assistance, as military aid to the civil power is an aspect of many armed forces. Perhaps the most high-profile example of this was in 1980, when the British Army's Special Air Service was deployed to resolve the Iranian Embassy siege on behalf of the Metropolitan Police.
They can also be armed with "non-lethal" (more accurately known as "less than lethal" or "less-lethal" given that they can still be deadly) weaponry, particularly for riot control, or to inflict pain against a resistant suspect to force them to surrender without lethally wounding them. Non-lethal weapons include batons, tear gas, riot control agents, rubber bullets, riot shields, water cannons, and electroshock weapons. Police officers typically carry handcuffs to restrain suspects.
The use of firearms or deadly force is typically a last resort, to be used only when necessary to save the lives of others or of the officers themselves, though some jurisdictions (such as Brazil) allow its use against fleeing felons and escaped convicts. Police officers in the United States are generally allowed to use deadly force if they believe their life is in danger, a policy that has been criticized for being vague. South African police have a "shoot-to-kill" policy, which allows officers to use deadly force against any person who poses a significant threat to them. With the country having one of the highest rates of violent crime, President Jacob Zuma stated that South Africa needs to handle crime differently from other countries.
=== Communications ===
Modern police forces make extensive use of two-way radio communications equipment, carried both on the person and installed in vehicles, to coordinate their work, share information, and get help quickly. Vehicle-installed mobile data terminals enhance police communications, enabling easier dispatching of calls, allowing criminal background checks on persons of interest to be completed in a matter of seconds, and updating officers' daily activity logs and other required reports in real time. Other common pieces of police equipment include flashlights, whistles, police notebooks, and "ticket books" for citations. Some police departments have developed advanced computerized data display and communication systems to bring real-time data to officers, one example being the NYPD's Domain Awareness System.
=== Vehicles ===
Police vehicles are used for detaining, patrolling, and transporting over wide areas that an officer could not effectively cover otherwise. The average police car used for standard patrol is a four-door sedan, SUV, or CUV, often modified by the manufacturer or police force's fleet services to provide better performance. Pickup trucks, off-road vehicles, and vans are often used in utility roles, though in some jurisdictions or situations (such as those where dirt roads are common, off-roading is required, or the nature of the officer's assignment necessitates it), they may be used as standard patrol cars. Sports cars are typically not used by police due to cost and maintenance issues, though those that are used are typically only assigned to traffic enforcement or community policing, and are rarely, if ever, assigned to standard patrol or authorized to respond to dangerous calls (such as armed calls or pursuits) where the likelihood of the vehicle being damaged or destroyed is high. Police vehicles are usually marked with appropriate symbols and equipped with sirens and flashing emergency lights to make others aware of police presence or response; in most jurisdictions, police vehicles with their sirens and emergency lights on have right of way in traffic, while in other jurisdictions, emergency lights may be kept on while patrolling to ensure ease of visibility. Unmarked or undercover police vehicles are used primarily for traffic enforcement or apprehending criminals without alerting them to their presence. The use of unmarked police vehicles for traffic enforcement is controversial, with the state of New York banning this practice in 1996 on the grounds that it endangered motorists who might be pulled over by police impersonators.
Motorcycles, having historically been a mainstay in police fleets, are commonly used, particularly in locations that a car may not be able to reach, to control potential public order situations involving meetings of motorcyclists, and often in police escorts where motorcycle police officers can quickly clear a path for escorted vehicles. Bicycle patrols are used in some areas, often downtown areas or parks, because they allow for wider and faster area coverage than officers on foot. Bicycles are also commonly used by riot police to create makeshift barricades against protesters.
Police aviation consists of helicopters and fixed-wing aircraft, while police watercraft tend to consist of RHIBs, motorboats, and patrol boats. SWAT vehicles are used by police tactical units and often consist of four-wheeled armored personnel carriers used to transport tactical teams while providing armored cover, equipment storage space, or makeshift battering ram capabilities; these vehicles are typically unarmed and are used only for transport rather than patrol. Mobile command posts may also be used by some police forces to establish identifiable command centers at the scene of major situations.
Police cars may contain issued long guns, ammunition for issued weapons, less-lethal weaponry, riot control equipment, traffic cones, road flares, physical barricades or barricade tape, fire extinguishers, first aid kits, or defibrillators.
== Strategies ==
The advent of the police car, two-way radio, and telephone in the early 20th century transformed policing into a reactive strategy that focused on responding to calls for service away from their beat. With this transformation, police command and control became more centralized.
In the United States, August Vollmer introduced other reforms, including education requirements for police officers. O.W. Wilson, a student of Vollmer, helped reduce corruption and introduce professionalism in Wichita, Kansas, and later in the Chicago Police Department. Strategies employed by Wilson included rotating officers from community to community to reduce their vulnerability to corruption, establishing a non-partisan police board to help govern the police force, instituting a strict merit system for promotions within the department, and mounting an aggressive recruiting drive with higher police salaries to attract professionally qualified officers. During the professionalism era of policing, law enforcement agencies concentrated on dealing with felonies and other serious crimes and conducting visible car patrols in between, rather than taking a broader focus on crime prevention.
The Kansas City Preventive Patrol study in the early 1970s showed flaws in using visible car patrols for crime prevention. It found that aimless car patrols did little to deter crime and often went unnoticed by the public. Patrol officers in cars had insufficient contact and interaction with the community, leading to a social rift between the two. In the 1980s and 1990s, many law enforcement agencies began to adopt community policing strategies, and others adopted problem-oriented policing.
Broken windows policing was another, related approach introduced in the 1980s by James Q. Wilson and George L. Kelling, who suggested that police should pay greater attention to minor "quality of life" offenses and disorderly conduct. The concept behind this method is simple: broken windows, graffiti, and other physical destruction or degradation of property create an environment in which crime and disorder are more likely. The presence of broken windows and graffiti sends a message that authorities do not care and are not trying to correct problems in these areas. Therefore, correcting these small problems prevents more serious criminal activity. The theory was popularised in the early 1990s by police chief William J. Bratton and New York City Mayor Rudy Giuliani. It was emulated in the 2010s in Kazakhstan through zero-tolerance policing, yet it failed to produce meaningful results there because citizens distrusted the police while state leaders preferred police loyalty over good police behavior.
Building upon these earlier models, intelligence-led policing has also become an important strategy. Intelligence-led policing and problem-oriented policing are complementary strategies, both of which involve systematic use of information. Although it still lacks a universally accepted definition, the crux of intelligence-led policing is an emphasis on the collection and analysis of information to guide police operations, rather than the reverse.
A related development is evidence-based policing. In a similar vein to evidence-based policy, evidence-based policing is the use of controlled experiments to find which methods of policing are more effective. Leading advocates of evidence-based policing include the criminologist Lawrence W. Sherman and philanthropist Jerry Lee. Findings from controlled experiments include the Minneapolis Domestic Violence Experiment, evidence that patrols deter crime if they are concentrated in crime hotspots and that restricting police powers to shoot suspects does not cause an increase in crime or violence against police officers. Use of experiments to assess the usefulness of strategies has been endorsed by many police services and institutions, including the U.S. Police Foundation and the UK College of Policing.
== Power restrictions ==
In many nations, criminal procedure law has been developed to regulate officers' discretion, so that they do not arbitrarily or unjustly exercise their powers of arrest, search and seizure, and use of force. In the United States, Miranda v. Arizona led to the widespread use of Miranda warnings or constitutional warnings.
In Miranda, the court created safeguards against self-incriminating statements made after an arrest. The court held that "The prosecution may not use statements, whether exculpatory or inculpatory, stemming from questioning initiated by law enforcement officers after a person has been taken into custody or otherwise deprived of his freedom of action in any significant way, unless it demonstrates the use of procedural safeguards effective to secure the Fifth Amendment's privilege against self-incrimination".
Police in the United States are also prohibited from holding criminal suspects for more than a reasonable amount of time (usually 24–48 hours) before arraignment, using torture, abuse or physical threats to extract confessions, using excessive force to effect an arrest, and searching suspects' bodies or their homes without a warrant obtained upon a showing of probable cause. The four exceptions to the constitutional requirement of a search warrant are:
Consent
Search incident to arrest
Motor vehicle searches
Exigent circumstances
In Terry v. Ohio (1968) the court divided seizure into two parts, the investigatory stop and arrest. The court further held that during an investigatory stop a police officer's search " [is] confined to what [is] minimally necessary to determine whether [a suspect] is armed, and the intrusion, which [is] made for the sole purpose of protecting himself and others nearby, [is] confined to ascertaining the presence of weapons" (U.S. Supreme Court). Before Terry, every police encounter constituted an arrest, giving the police officer the full range of search authority. Search authority during a Terry stop (investigatory stop) is limited to weapons only.
Using deception for confessions is permitted, but not coercion. There are exceptions or exigent circumstances such as an articulated need to disarm a suspect or searching a suspect who has already been arrested (Search Incident to an Arrest). The Posse Comitatus Act severely restricts the use of the military for police activity, giving added importance to police SWAT units.
British police officers are governed by similar rules, such as those introduced to England and Wales under the Police and Criminal Evidence Act 1984 (PACE), but generally have greater powers. They may, for example, legally search any suspect who has been arrested, or their vehicles, home or business premises, without a warrant, and may seize anything they find in a search as evidence.
All police officers in the United Kingdom, whatever their actual rank, are 'constables' in terms of their legal position. This means that a newly appointed constable has the same arrest powers as a Chief Constable or Commissioner. However, certain higher ranks have additional powers to authorize certain aspects of police operations, such as a power to authorize a search of a suspect's house (section 18 PACE in England and Wales) by an officer of the rank of Inspector, or the power to authorize a suspect's detention beyond 24 hours by a Superintendent.
== Conduct, accountability and public confidence ==
Police services commonly include units for investigating crimes committed by the police themselves. These units are typically called internal affairs or inspectorate-general units. In some countries separate organizations outside the police exist for such purposes, such as the British Independent Office for Police Conduct. In the United States, due to American laws around qualified immunity, it has become increasingly difficult to investigate and charge police misconduct and crimes.
Likewise, some state and local jurisdictions, for example, Springfield, Illinois have similar outside review organizations. The Police Service of Northern Ireland is investigated by the Police Ombudsman for Northern Ireland, an external agency set up as a result of the Patten report into policing the province. In the Republic of Ireland, the Garda Síochána is investigated by Fiosrú – the Office of the Police Ombudsman, a body founded as Garda Síochána Ombudsman Commission to replace the Garda Complaints Board in May 2007, and reorganised under its new title in 2025.
The Special Investigations Unit of Ontario, Canada, is one of only a few civilian agencies around the world responsible for investigating circumstances involving police and others that have resulted in a death, serious injury, or allegations of sexual assault. The agency has made allegations of insufficient cooperation from various police services hindering their investigations.
In Hong Kong, any allegations of corruption within the police are investigated by the Independent Commission Against Corruption and the Independent Police Complaints Council, two agencies which are independent of the police force.
Police body cameras are often worn by police officers to record their interactions with the public and each other, providing audiovisual recorded evidence for review in the event an officer or agency's actions are investigated.
=== Use of force ===
Police forces also find themselves under criticism for their use of force, particularly deadly force. Specifically, tension increases when a police officer of one ethnic group harms or kills a suspect of another one. In the United States, such events occasionally spark protests and accusations of racism against police and allegations that police departments practice racial profiling. Similar incidents have also happened in other countries.
In the United States since the 1960s, concern over such issues has increasingly weighed upon law enforcement agencies, courts and legislatures at every level of government. Incidents such as the 1965 Watts riots, the videotaped 1991 beating by LAPD officers of Rodney King, and the riot following their acquittal have been suggested by some people to be evidence that U.S. police are dangerously lacking in appropriate controls.
The fact that this trend has occurred contemporaneously with the rise of the civil rights movement, the "War on Drugs", and a precipitous rise in violent crime from the 1960s to the 1990s has made questions surrounding the role, administration and scope of police authority increasingly complicated.
Police departments and the local governments that oversee them in some jurisdictions have attempted to mitigate some of these issues through community outreach programs and community policing to make the police more accessible to the concerns of local communities, by working to increase hiring diversity, by updating training of police in their responsibilities to the community and under the law, and by increased oversight within the department or by civilian commissions.
In cases in which such measures have been lacking or absent, civil lawsuits have been brought by the United States Department of Justice against local law enforcement agencies, authorized under the 1994 Violent Crime Control and Law Enforcement Act. This has compelled local departments to make organizational changes, enter into consent decree settlements to adopt such measures, and submit to oversight by the Justice Department.
In May 2020, a global movement to increase scrutiny of police violence grew in popularity, starting in Minneapolis, Minnesota with the murder of George Floyd. Calls for defunding of the police and full abolition of the police gained larger support in the United States as more criticized systemic racism in policing.
Critics also argue that this abuse of force or power can sometimes extend to police officers' civilian lives as well. For example, critics note that women in around 40% of police officer families have experienced domestic violence and that police officers are convicted of misdemeanors and felonies at a rate more than six times higher than that of concealed carry weapon permit holders.
=== Protection of individuals ===
The Supreme Court of the United States has consistently ruled that law enforcement officers in the U.S. have no duty to protect any individual, only to enforce the law in general. This is despite the motto of many police departments in the U.S. being a variation of "protect and serve"; regardless, many departments generally expect their officers to protect individuals. The first case to make such a ruling was South v. State of Maryland in 1855, and the most recent was Town of Castle Rock v. Gonzales in 2005.
In contrast, the police are entitled to protect private rights in some jurisdictions. To ensure that the police would not interfere in the regular competencies of the courts of law, some police acts require that the police may only interfere in such cases where protection from courts cannot be obtained in time, and where, without interference of the police, the realization of the private right would be impeded. This would, for example, allow police to establish a restaurant guest's identity and forward it to the innkeeper in a case where the guest cannot pay the bill at nighttime because his wallet had just been stolen from the restaurant table.
In addition, there are federal law enforcement agencies in the United States whose mission includes providing protection for executives such as the president and accompanying family members, visiting foreign dignitaries, and other high-ranking individuals. Such agencies include the U.S. Secret Service and the U.S. Park Police.
== See also ==
Lists
List of basic law enforcement topics
List of countries and dependencies by number of police officers
List of countries with annual rates and counts for killings by law enforcement officers
List of law enforcement agencies
List of protective service agencies
Police rank
== References ==
== Further reading ==
Mitrani, Samuel (2014). The Rise of the Chicago Police Department: Class and Conflict, 1850–1894. University of Illinois Press, 272 pages.
Interview with Sam Mitrani: "The Function of Police in Modern Society: Peace or Control?" (January 2015), The Real News
== External links ==
United Nations Police Division | Wikipedia/Law_enforcer |
A killer application (often shortened to killer app) is any software that is so necessary or desirable that it proves the core value of some larger technology, such as its host computer hardware, video game console, software platform, or operating system. Consumers would buy the host platform just to access that application, possibly substantially increasing sales of its host platform.
== Usage ==
One mark of a good computer is the appearance of a piece of software specifically written for that machine that does something that, for a while at least, can only be done on that machine.
The earliest recorded use of the term "killer app" in print is in the May 24, 1988 issue of PC Week: "Everybody has only one killer application. The secretary has a word processor. The manager has a spreadsheet."
The definition of "killer app" came up during the deposition of Bill Gates in the United States v. Microsoft Corp. antitrust case. He had written an email in which he described Internet Explorer as a killer app. In the questioning, he said that the term meant "a popular application," and did not connote an application that would fuel sales of a larger product or one that would supplant its competition, as the Microsoft Computer Dictionary defined it.
Introducing the iPhone in 2007, Steve Jobs said that "the killer app is making calls". Reviewing the iPhone's first decade, David Pierce for Wired wrote that although Jobs prioritized a good experience making calls in the phone's development, other features of the phone soon became more important, such as its data connectivity and the later ability to install third-party software.
The World Wide Web (through the web browsers Mosaic and Netscape Navigator) was the killer app that popularized the Internet, as was the music-sharing program Napster.
== Examples ==
Although the term was coined in the late 1980s, one of the first retroactively recognized examples of a killer application is the VisiCalc spreadsheet, released in 1979 for the Apple II. Because it was not released for other computers for 12 months, people spent US$100 (equivalent to $400 in 2024) for the software first, then $2,000 to $10,000 (equivalent to $9,000 to $43,000) on the requisite Apple II. BYTE wrote in 1980, "VisiCalc is the first program available on a microcomputer that has been responsible for sales of entire systems", and Creative Computing's VisiCalc review is subtitled "reason enough for owning a computer". Others also chose to develop software, such as EasyWriter, for the Apple II first because of its higher sales, helping Apple defeat rivals Commodore International and Tandy Corporation.
The co-creator of WordStar, Seymour Rubinstein, argued that the honor of the first killer app should go to that popular word processor, given that it came out a year before VisiCalc and that it gave a reason for people to buy a computer. However, whereas WordStar could be considered an incremental improvement (albeit a large one) over smart typewriters like the IBM Electronic Selectric Composer, VisiCalc, with its ability to instantly recalculate rows and columns, introduced an entirely new paradigm and capability.
Although released four years after VisiCalc, Lotus 1-2-3 also benefited sales of the IBM PC. Noting that computer purchasers did not want PC compatibility as much as compatibility with certain PC software, InfoWorld suggested "let's tell it like it is. Let's not say 'PC compatible', or even 'MS-DOS compatible'. Instead, let's say '1-2-3 compatible'."
The UNIX operating system became a killer application for the DEC PDP-11 and VAX-11 minicomputers during roughly 1975–1985. Many of the PDP-11 and VAX-11 processors never ran DEC's operating systems (RSTS or VAX/VMS); instead, they ran UNIX, which was first licensed in 1975. Getting a virtual-memory UNIX (BSD 3.0) required a VAX-11 computer. Many universities wanted a general-purpose timesharing system that would meet the needs of students and researchers. Early versions of UNIX included free compilers for C, Fortran, and Pascal, at a time when offering even one free compiler was unprecedented. From its inception, UNIX drove high-quality typesetting equipment, and later PostScript printers, using the nroff/troff typesetting language, which was also unprecedented. UNIX was the first operating system offered in source-license form (a university license cost only $10,000, less than a PDP-11), allowing it to run on an unlimited number of machines and to interface to any type of hardware, because the UNIX I/O system is extensible.
=== Applications and operating systems ===
1979: Apple II: VisiCalc (first spreadsheet program and killer app)
1979: TRS-80, CP/M systems: WordStar 1982: ported to CP/M-86 and IBM PC compatible/MS-DOS
1983: IBM PC compatible/MS-DOS: Lotus 1-2-3 (spreadsheet)
1985: Macintosh: Aldus (now Adobe) PageMaker (first desktop publishing program)
1985: AmigaOS: Deluxe Paint, Video Toaster, Prevue Guide
1993: Acorn Archimedes: Sibelius
1995: Windows 95
=== Video games ===
The term applies to video games that persuade consumers to buy a particular video game console or accessory, by virtue of platform exclusivity. Such a game is also called a "system seller".
Space Invaders, originally released for arcades in 1978, became a killer app when it was ported to the Atari VCS console in 1980, quadrupling sales of the three-year-old console.
Star Raiders, released in 1980, was the first killer app computer game. BYTE named it the single most important reason for sales of Atari 400 and 800 computers. Another was Eastern Front (1941), released in 1981.
In 1996, Computer Gaming World wrote that Wizardry: Proving Grounds of the Mad Overlord (1981) "sent AD&D fans scrambling to buy Apple IIs".
The Famicom home port of Xevious is considered the console's first killer app, which caused system sales to jump by nearly 2 million units.
Computer Gaming World stated that The Legend of Zelda on the Nintendo Entertainment System, Phantasy Star II on the Sega Genesis, and Far East of Eden for the NEC TurboGrafx-16 were killer apps for their consoles.
The Super Mario, Final Fantasy, and Dragon Quest series were killer apps for Nintendo's Famicom and Super Famicom consoles in Japan.
John Madden Football's popularity in 1990 helped the Genesis gain market share against the Super NES in North America.
Sonic the Hedgehog, released in 1991, was hailed as a killer app as it revived sales of the three-year-old Genesis.
Mortal Kombat helped push sales of the Genesis because it was uncensored, unlike the Nintendo version.
Streets of Rage became a system seller for the Mega Drive/Genesis in the UK.
Street Fighter II, originally released for arcades in 1991, became a system-seller for the Super NES when it was ported to the platform in 1992.
Donkey Kong Country for the SNES helped Nintendo's comeback against Sega.
Myst and The 7th Guest, both released in 1993, drove adoption of CD-ROM drives for personal computers.
Virtua Fighter 2, Nights into Dreams, and Sakura Wars are the killer apps for the Sega Saturn.
Euro 96 and Sega Rally Championship were major system-sellers for the Sega Saturn in the United Kingdom, with the latter becoming the fastest-selling CD game.
Die Hard Arcade and Fighters Megamix boosted the Sega Saturn's sales in the United States.
Ridge Racer, Tekken, Wipeout, Tomb Raider, and Crash Bandicoot are the killer apps for the PlayStation. Tomb Raider was released for the Sega Saturn first and for MS-DOS at the same time, but the games contributed substantially to the original PlayStation's early success.
Final Fantasy VII is another killer app for the PlayStation. Computing Japan magazine said that it was largely responsible for the PlayStation's global installed base increasing 60% from 10 million units sold by November 1996 to 16 million units sold by May 1997.
Super Mario 64 and GoldenEye 007 are the killer apps for the Nintendo 64.
Virtua Fighter 3, Sonic Adventure, and The House of the Dead 2 are the killer apps for the Dreamcast.
NFL 2K is a killer app for the Dreamcast in the United States.
Gran Turismo 3 and the Grand Theft Auto games are the killer apps for the PlayStation 2.
Star Wars Rogue Squadron II: Rogue Leader, Super Smash Bros. Melee, and Super Mario Sunshine are the killer apps for the GameCube.
Halo: Combat Evolved and Halo 2 are the killer apps for the Xbox, and the subsequent series entries became killer apps for the Xbox 360 and Xbox One.
Many video game and technology critics call Xbox Live a more general killer app for the Xbox.
Blue Dragon is a killer app for the Xbox 360 in Japan.
Wii Sports is the killer app for the Wii.
Metal Gear Solid 4: Guns of the Patriots boosted PlayStation 3 sales.
Mario Kart 8 is a killer app for the Wii U in the UK.
The Legend of Zelda: Breath of the Wild is a killer app for the Nintendo Switch.
Half-Life: Alyx is a killer app for virtual reality headsets, as the first true AAA virtual reality game. Sales of VR headsets such as the Valve Index increased dramatically after its announcement, suggesting users bought the product specifically for the game.
Microsoft Flight Simulator was called a killer app for Xbox Game Studios's Xbox Game Pass subscription, and the Xbox Series X/S.
Pokémon games are killer apps for Nintendo handhelds, often topping the best-selling charts for whatever system they appear on.
== See also ==
Disruptive innovation
Unique selling point
Vendor lock-in
Use case
== References == | Wikipedia/Killer_application |
3D computer graphics, sometimes called CGI, 3D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later (possibly as an animation) or displayed in real time.
3D computer graphics, contrary to what the name suggests, are most often displayed on two-dimensional displays. Unlike 3D film and similar techniques, the result is two-dimensional, without visual depth. Increasingly, however, 3D graphics are displayed on true 3D displays, as in virtual reality systems.
3D graphics stand in contrast to 2D computer graphics which typically use completely different methods and formats for creation and rendering.
3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and similarly, 3D may use some 2D rendering techniques.
The objects in 3D computer graphics are often referred to as 3D models. Unlike the rendered image, a model's data is contained within a graphical data file. A 3D model is a mathematical representation of any three-dimensional object; a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or it can be used in non-graphical computer simulations and calculations. With 3D printing, models are rendered into an actual 3D physical representation of themselves, with some limitations as to how accurately the physical model can match the virtual model.
== History ==
William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. An early example of interactive 3-D computer graphics was explored in 1963 by the Sketchpad program at Massachusetts Institute of Technology's Lincoln Laboratory. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke.
3-D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3-D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978 for the Apple II.
Virtual reality 3D is a version of 3D computer graphics. Although the first headset appeared in the late 1950s, the popularity of VR did not take off until the 2000s. In 2012 the Oculus Rift was announced, and the 3D VR headset market has expanded since then.
== Overview ==
3D computer graphics production workflow falls into three basic phases:
3D modeling – the process of forming a computer model of an object's shape
Layout and CGI animation – the placement and movement of objects (models, lights etc.) within a scene
3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate (rasterize the scene into) an image
=== Modeling ===
The modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects (Polygonal Modeling, Patch Modeling and NURBS Modeling are some popular tools used in 3D modeling). Models can also be produced procedurally or via physical simulation.
A 3D model is formed from points called vertices that define its shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
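The vertex-and-polygon structure described above can be sketched in a few lines of Python. This is an illustrative example, not tied to any particular 3D package: a list of vertex positions plus a list of triangles that index into it.

```python
# A minimal polygonal mesh: vertices hold positions, faces index into
# the vertex list. Four vertices of a unit square in the z = 0 plane.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]

# Two triangles (each a 3-tuple of vertex indices) tile the square.
faces = [
    (0, 1, 2),
    (0, 2, 3),
]

def face_area(face):
    """Area of a triangular face via the cross product of two edges."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    return 0.5 * (nx * nx + ny * ny + nz * nz) ** 0.5

total = sum(face_area(f) for f in faces)
print(total)  # the two triangles cover the unit square: 1.0
```

Real modelers store the same two arrays (a vertex buffer and an index buffer), along with per-vertex attributes such as normals and texture coordinates.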
=== Layout and animation ===
Before rendering into an image, objects must be laid out in a 3D scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object (i.e., how it moves and deforms over time); popular methods include keyframing, inverse kinematics, and motion capture. These techniques are often used in combination. As with animation, physical simulation also specifies motion.
Stop Motion has multiple categories within such as Claymation, Cutout, Silhouette, Lego, Puppets, and Pixelation.
Claymation is animation using models made of clay. Some examples are Clay Fighter and Clay Jam.
Lego animation is one of the more common types of stop motion. In Lego stop motion, the figures themselves are moved between frames. Some examples of this are Lego Island and Lego Harry Potter.
=== Materials and textures ===
Materials and textures are properties that the render engine uses to render the model. One can give the model materials to tell the render engine how to treat light when it hits the surface. Textures are used to give the material color using a color or albedo map, or to give the surface features using a bump map or normal map. They can also be used to deform the model itself using a displacement map.
=== Rendering ===
Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3-D computer graphics software or a 3-D graphics API.
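As a toy illustration of the scattering side, Lambertian (diffuse) shading computes reflected intensity from the cosine of the angle between the surface normal and the light direction. This is one simple, assumed model, not any particular renderer's API:

```python
import math

def lambert(normal, light_dir, intensity=1.0):
    """Diffuse intensity: I * max(cos(theta), 0), theta between N and L."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = norm(normal), norm(light_dir)
    cos_theta = sum(a * b for a, b in zip(n, l))
    return intensity * max(cos_theta, 0.0)  # no light from behind the surface

print(lambert((0, 0, 1), (0, 0, 1)))  # light head-on: 1.0
print(round(lambert((0, 0, 1), (1, 0, 1)), 3))  # light at 45 degrees: 0.707
```

Realistic renderers combine many such scattering models (BRDFs) with a transport algorithm such as path tracing.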
Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3-D modeling and CAD software may perform 3-D rendering as well (e.g., Autodesk 3ds Max or Blender), exclusive 3-D rendering software also exists (e.g., OTOY's Octane Rendering Engine, Maxon's Redshift).
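The projection step can be sketched with an assumed pinhole-camera model: a 3D point in camera space maps to 2D by dividing x and y by the depth z and scaling by a focal length f (the function name here is illustrative):

```python
def project(point, f=1.0):
    """Perspective-project a camera-space point (x, y, z) to 2D."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (f * x / z, f * y / z)

# A point twice as far away projects to half the image-plane offset,
# which is what produces the perspective foreshortening effect.
print(project((1.0, 1.0, 1.0)))  # (1.0, 1.0)
print(project((1.0, 1.0, 2.0)))  # (0.5, 0.5)
```

Graphics APIs express the same divide-by-depth as a 4x4 projection matrix followed by a homogeneous divide.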
Examples of 3-D rendering
== Software ==
3-D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D rendering or produces 3-D models for analytical, scientific and industrial purposes.
=== File formats ===
There are many varieties of files supporting 3-D graphics, for example, Wavefront .obj files, Autodesk .fbx files, and DirectX .x files. Each file type generally has its own unique data structure.
Each file format can usually be accessed through its respective application, as with DirectX files and Quake model files. Alternatively, files can be accessed through third-party standalone programs, or via manual decompilation.
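As an illustration of how such formats are structured, the following sketch reads the two most common Wavefront .obj record types: "v" vertex lines and "f" face lines. Real .obj files contain many more record types (normals, texture coordinates, groups), which this toy parser ignores:

```python
def parse_obj(text):
    """Collect vertex positions and (0-indexed) face index tuples."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":
            # .obj indices are 1-based; keep only the part before any '/'
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

sample = """\
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
verts, faces = parse_obj(sample)
print(len(verts), faces[0])  # 3 (0, 1, 2)
```

The text-based simplicity of .obj is one reason it remains a common interchange format despite its age.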
=== Modeling ===
3-D modeling software is a class of 3-D computer graphics software used to produce 3-D models. Individual programs of this class are called modeling applications or modelers.
3-D modeling starts by describing three display primitives: points, lines, and triangles (and other polygonal patches).
3-D modelers allow users to create and alter models via their 3-D mesh. Users can add, subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can be rotated and the view can be zoomed in and out.
3-D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write data in the native formats of other applications.
Most 3-D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able to generate full-motion video of a series of rendered scenes (i.e. animation).
=== Computer-aided design (CAD) ===
Computer aided design software may employ the same fundamental 3-D modeling techniques that 3-D modeling software use but their goal differs. They are used in computer-aided engineering, computer-aided manufacturing, Finite element analysis, product lifecycle management, 3D printing and computer-aided architectural design.
=== Complementary tools ===
After producing a video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final Cut Pro at the mid-level, or Autodesk Combustion, Digital Fusion, Shake at the high-end. Match moving software is commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves.
Use of real-time computer graphics engines to create a cinematic production is called machinima.
== Other types of 3D appearance ==
=== Photorealistic 2D graphics ===
Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wire-frame modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photo-realistic effects without the use of filters.
=== 2.5D ===
Some video games use 2.5D graphics, involving restricted projections of three-dimensional environments, such as isometric graphics or virtual cameras with fixed angles, either as a way to improve performance of the game engine or for stylistic and gameplay concerns. By contrast, games using 3D computer graphics without such restrictions are said to use true 3D.
=== Other forms of animation ===
Cutout animation uses flat materials such as paper: the environment, characters, and even some props are all cut out of paper. An example of this is Paper Mario. Silhouette is similar to cutout except that everything is a single solid color, black; Limbo is an example of this. Puppet animation uses dolls and other puppets, as in Yoshi's Woolly World. Pixelation makes the entire game appear pixelated, including the characters and the environment around them; one example of this is seen in Shovel Knight.
== See also ==
Graphics processing unit (GPU)
List of 3D computer graphics software
3D data acquisition and object reconstruction
3D projection on 2D planes
Geometry processing
Isometric graphics in video games and pixel art
List of stereoscopic video games
Medical animation
Render farm
== References ==
== External links ==
A Critical History of Computer Graphics and Animation (Wayback Machine copy)
How Stuff Works - 3D Graphics
History of Computer Graphics series of articles (Wayback Machine copy)
How 3D Works - Explains 3D modeling for an illuminated manuscript
A server is a computer that provides information to other computers called "clients" on a computer network. This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
Client–server systems are most often implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgment. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.
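The request–response cycle above can be sketched with Python's standard socket module. This is an illustrative toy, not production server code: real servers loop, handle errors, and serve many clients concurrently.

```python
import socket
import threading

# A toy request–response exchange: the server receives one request,
# performs an action (here, just tagging the payload), and sends a
# response back to the client.
srv = socket.create_server(("127.0.0.1", 0))   # port 0: OS picks a free port
port = srv.getsockname()[1]

def handle_one_client():
    conn, _ = srv.accept()                     # wait for a client
    with conn:
        request = conn.recv(1024)              # read the request
        conn.sendall(b"ACK: " + request)       # respond with a result

t = threading.Thread(target=handle_one_client)
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")                   # client sends a request...
    reply = client.recv(1024)                  # ...and waits for the response

t.join()
srv.close()
print(reply.decode())  # ACK: hello
```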
== History ==
The use of the word server in computing comes from queueing theory, where it dates to the mid-20th century, being notably used in Kendall (1953) (along with "service"), the paper that introduced Kendall's notation. In earlier papers, such as Erlang (1909), more concrete terms such as "[telephone] operators" are used.
In computing, "server" dates at least to RFC 5 (1969), one of the earliest documents describing ARPANET (the predecessor of the Internet), and is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, which contrasts "serving-host" with "using-host".
The Jargon File defines server in the common sense of a process performing service for requests, usually remote, with the 1981 version reading:
SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs.

The average utilization of a server in the early 2000s was 5 to 15%, but with the adoption of virtualization this figure started to increase, as consolidation reduced the number of servers needed.
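The consolidation argument behind those utilization figures is simple arithmetic. The numbers below are illustrative, chosen within the cited 5–15% range:

```python
# Illustrative consolidation arithmetic: many lightly loaded physical
# servers can be replaced by a few well-utilized virtualization hosts.
physical_servers = 20
avg_utilization = 0.10        # within the cited 5-15% range
target_utilization = 0.80     # leave headroom on the consolidated hosts

aggregate_load = physical_servers * avg_utilization       # 2.0 server-equivalents
hosts_needed = -(-aggregate_load // target_utilization)   # ceiling division
print(int(hosts_needed))  # 3
```

Twenty servers idling at 10% carry two servers' worth of work, which fits on three hosts run at a healthier 80% ceiling.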
== Operation ==
Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as verb and as noun respectively) are frequently used, though servicer and servant are not. The word service (noun) may refer to the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve [up] web pages to users" or "service their requests".
The server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with the peer-to-peer model, in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process is a client. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.
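As an illustration of the last point, Python's bundled http.server module turns any general-purpose computer into a small file-serving web server. The file name and loopback address here are arbitrary choices for the sketch:

```python
import http.server
import pathlib
import threading
import urllib.request

# Any capable computer can host a server: the standard library here
# serves files from the current directory over HTTP.
pathlib.Path("shared.txt").write_text("shared data")

httpd = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# A client (possibly on another machine) requests the shared file.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/shared.txt").read()
httpd.shutdown()
print(body.decode())  # shared data
```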
While request–response is the most common client-server design, there are others, such as the publish–subscribe pattern. In the publish-subscribe pattern, clients register with a pub-sub server, subscribing to specified types of messages; this initial registration may be done by request-response. Thereafter, the pub-sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request-response.
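A minimal in-process sketch of the publish–subscribe pattern follows; the class and topic names are invented for illustration and do not correspond to any real library:

```python
from collections import defaultdict

class PubSubBroker:
    """Toy broker: clients register interest once, then messages are pushed."""

    def __init__(self):
        self.subscribers = defaultdict(list)      # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)  # one-time registration

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:  # push to each subscriber,
            callback(message)                     # no further requests needed

broker = PubSubBroker()
received = []
broker.subscribe("alerts", received.append)       # initial registration
broker.publish("alerts", "disk full")             # pushed, not pulled
broker.publish("metrics", "cpu 40%")              # no subscriber: dropped
print(received)  # ['disk full']
```

The key contrast with request–response is visible in the flow: after the single `subscribe` call, the client code never asks again; the broker delivers matching messages on its own initiative.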
== Purpose ==
The role of a server is to share data as well as to share resources and distribute work. A server computer can serve its own computer programs as well; depending on the scenario, this could be part of a quid pro quo transaction, or simply a technical possibility. The following table shows several scenarios in which a server is used.
Almost the entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS, and routers direct the traffic on the internet. There are millions of servers connected to the Internet, running continuously throughout the world and virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers. There are exceptions that do not use dedicated servers; for example, peer-to-peer file sharing and some implementations of telephony (e.g. pre-Microsoft Skype).
== Hardware ==
Hardware requirements for servers vary widely, depending on the server's purpose and its software. Servers often are more powerful and expensive than the clients that connect to them.
The name server is used for both hardware and software. For hardware, it is usually limited to mean high-end machines, although server software can run on a variety of hardware.
Since servers are usually accessed over a network, many run unattended, without a computer monitor, input devices, audio hardware, or USB interfaces. Many servers do not have a graphical user interface (GUI); they are configured and managed remotely. Remote management can be conducted via various methods, including Microsoft Management Console (MMC), PowerShell, SSH, and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO.
=== Large servers ===
Large traditional single servers would need to be run for long periods without interruption. Availability would have to be very high, making hardware reliability and durability extremely important. Mission-critical enterprise servers would be very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime. Uninterruptible power supplies might be incorporated to guard against power failure. Servers typically include hardware redundancy such as dual power supplies, RAID disk systems, and ECC memory, along with extensive pre-boot memory testing and verification. Critical components might be hot swappable, allowing technicians to replace them on the running server without shutting it down, and to guard against overheating, servers might have more powerful fans or use water cooling. They will often be able to be configured, powered up and down, or rebooted remotely, using out-of-band management, typically based on IPMI. Server casings are usually flat and wide, and designed to be rack-mounted, either on 19-inch racks or on Open Racks.
These types of servers are often housed in dedicated data centers. These will normally have very stable power and Internet and increased security. Noise is also less of a concern, but power consumption and heat output can be a serious issue. Server rooms are equipped with air conditioning devices.
=== Clusters ===
A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. Modern data centers are now often built of very large clusters of much simpler servers, and there is a collaborative effort, the Open Compute Project, organized around this concept.
=== Appliances ===
A class of small specialist servers called network appliances are generally at the low end of the scale, often being smaller than common desktop computers.
=== Mobile ===
A mobile server has a portable form factor, e.g. a laptop. In contrast to large data centers or rack servers, the mobile server is designed for on-the-road or ad hoc deployment into emergency, disaster or temporary environments where traditional servers are not feasible due to their power requirements, size, and deployment time. The main beneficiaries of so-called "server on the go" technology include network managers, software or database developers, training centers, military personnel, law enforcement, forensics, emergency relief groups, and service organizations. To facilitate portability, features such as the keyboard, display, battery (uninterruptible power supply, to provide power redundancy in case of failure), and mouse are all integrated into the chassis.
== Operating systems ==
On the Internet, the dominant operating systems among servers are UNIX-like open-source distributions, such as those based on Linux and FreeBSD, with Windows Server also having a significant share. Proprietary operating systems such as z/OS and macOS Server are also deployed, but in much smaller numbers. Servers running Linux are commonly used as web servers or database servers, while Windows Server is often used in networks made up of Windows clients.
Specialist server-oriented operating systems have traditionally had features such as:
GUI not available or optional
Ability to reconfigure and update both hardware and software to some extent without restart
Advanced backup facilities to permit regular and frequent online backups of critical data
Transparent data transfer between different volumes or devices
Flexible and advanced networking capabilities
Automation capabilities such as daemons in UNIX and services in Windows
Tight system security, with advanced user, resource, data, and memory protection
Advanced detection and alerting on conditions such as overheating, processor and disk failure
In practice, today many desktop and server operating systems share similar code bases, differing mostly in configuration.
== Energy consumption ==
In 2010, data centers (servers, cooling, and other electrical infrastructure) were responsible for 1.1–1.5% of electrical energy consumption worldwide and 1.7–2.2% in the United States. One estimate is that total energy consumption for information and communications technology saves more than 5 times its carbon footprint in the rest of the economy by increasing efficiency.
Global energy consumption is increasing due to the increasing demand for data and bandwidth. The Natural Resources Defense Council (NRDC) states that data centers used 91 billion kilowatt-hours (kWh) of electrical energy in 2013, which amounts to about 3% of global electricity usage.
Environmental groups have placed focus on the carbon emissions of data centers, which account for about 200 million metric tons of carbon dioxide per year.
== See also ==
Peer-to-peer
== Notes ==
== References ==
== Further reading ==
Application virtualization is a software technology that encapsulates computer programs from the underlying operating system on which they are executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application behaves at runtime like it is directly interfacing with the original operating system and all the resources managed by it, but can be isolated or sandboxed to varying degrees.
In this context, the term "virtualization" refers to the artifact being encapsulated (application), which is quite different from its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).
== Description ==
Full application virtualization requires a virtualization layer. Application virtualization layers replace part of the runtime environment normally provided by the operating system. The layer intercepts all disk operations of virtualized applications and transparently redirects them to a virtualized location, often a single file. The application remains unaware that it accesses a virtual resource instead of a physical one. Since the application is now working with one file instead of many files spread throughout the system, it becomes easy to run the application on a different computer and previously incompatible applications can be run side by side.
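The core trick of such a layer, intercepting a file operation and transparently redirecting it into a private store, can be sketched in a few lines. This is a toy illustration only; real products hook the operating system at a much lower level, and the paths and function names here are invented:

```python
import os
import tempfile

# Toy redirection layer: the application asks for a system path, and the
# layer silently maps it into a single private, virtualized location.
VIRTUAL_ROOT = tempfile.mkdtemp()   # stands in for the app's backing store

def redirect(path):
    """Map a requested absolute path into the virtual store."""
    return os.path.join(VIRTUAL_ROOT, path.lstrip("/"))

def virtual_write(path, data):
    target = redirect(path)                       # the app asked for `path`
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, "w") as f:                  # the layer writes elsewhere
        f.write(data)

# The application believes it writes to a protected system location.
virtual_write("/etc/myapp/config.ini", "[ui]\ntheme=dark\n")
print(os.path.exists(os.path.join(VIRTUAL_ROOT, "etc/myapp/config.ini")))  # True
```

Because every write lands under `VIRTUAL_ROOT`, the whole application state can be moved to another machine by copying that one location.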
== Benefits ==
Application virtualization allows applications to run in environments that do not suit the native application. For example, Wine allows some Microsoft Windows applications to run on Linux.
Application virtualization reduces system integration and administration costs by maintaining a common software baseline across multiple diverse computers in an organization. Lesser integration protects the operating system and other applications from poorly written or buggy code. In some cases, it provides memory protection, IDE-style debugging features and may even run applications that are not written correctly, for example applications that try to store user data in a read-only system-owned location. (This feature assists in the implementation of the principle of least privilege by removing the requirement for end-users to have administrative privileges in order to run poorly written applications.) It allows incompatible applications to run side by side, at the same time and with minimal regression testing against one another. Isolating applications from the operating system has security benefits as well, as the exposure of the virtualized application does not automatically entail the exposure of the entire OS.
Application virtualization also enables simplified operating system migrations. Applications can be transferred to removable media or between computers without the need of installing them, becoming portable software.
Application virtualization uses fewer resources than a separate virtual machine.
== Limitations ==
Not all computer programs can be virtualized. Some examples include applications that require a device driver (a form of integration with the OS) and 16-bit applications that need to run in shared memory space. Anti-virus programs and applications that require heavy OS integration, such as WindowBlinds or StyleXP, are difficult to virtualize.
Moreover, in software licensing, application virtualization bears great licensing pitfalls mainly because both the application virtualization software and the virtualized applications must be correctly licensed.
While application virtualization can address file and Registry-level compatibility issues between legacy applications and newer operating systems, applications that don't manage the heap correctly will not execute on Windows Vista as they still allocate memory in the same way, regardless of whether they are virtualized. For this reason, specialist application compatibility fixes (shims) may still be needed, even if the application is virtualized.
Functional discrepancies within the multicompatibility model are an additional limitation, where utility-driven access points are shared within a public network. These limitations are overcome by designating a system level share point driver.
== Related technologies ==
Technology categories that fall under application virtualization include:
Application streaming. Pieces of the application's code, data, and settings are delivered when they're first needed, instead of the entire application being delivered before startup. Running the packaged application may require the installation of a lightweight client application. Packages are usually delivered over a protocol such as HTTP, CIFS or RTSP.
Remote Desktop Services (formerly called Terminal Services) is a server-based computing/presentation virtualization component of Microsoft Windows that allows a user to access applications and data hosted on a remote computer over a network. Remote Desktop Services sessions run in a single shared-server operating system (e.g. Windows Server 2008 R2 and later) and are accessed using the Remote Desktop Protocol.
Desktop virtualization software technologies improve portability, manageability and compatibility of a personal computer's desktop environment by separating part or all of the desktop environment and associated applications from the physical client device that is used to access it. A common implementation of this approach is to host multiple desktop operating system instances on a server hardware platform running a hypervisor. This is generally referred to as "virtual desktop infrastructure" (VDI).
== See also ==
Workspace virtualization
OS-level virtualization ("containerization")
Portable application creators
Comparison of application virtualization software
Shim (computing)
Virtual application
Emulator
== References ==
A portable application (portable app), sometimes also called standalone software, is a computer program designed to operate without changing other files or requiring other software to be installed. In this way, it can be easily added to, run, and removed from any compatible computer without setup or side-effects.
In practical terms, a portable application often stores user-created data and configuration settings in the same directory it resides in. This makes it easier to transfer the program with the user's preferences and data between different computers. A program that doesn't have any configuration options can also be a portable application.
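The settings-beside-the-program idea can be sketched as follows. The directory and file names are illustrative; `app_dir` stands in for the folder the program resides in (in a real program it would typically be derived from `os.path.dirname(sys.argv[0])`):

```python
import json
import os
import tempfile

# Sketch: a portable program keeps its settings file in its own directory,
# so the program and the user's preferences travel together.
app_dir = tempfile.mkdtemp()        # stand-in for e.g. a folder on a USB drive
settings_path = os.path.join(app_dir, "settings.json")   # beside the program

def load_settings():
    if os.path.exists(settings_path):
        with open(settings_path) as f:
            return json.load(f)
    return {"theme": "light"}       # defaults on first run

def save_settings(settings):
    with open(settings_path, "w") as f:
        json.dump(settings, f)

save_settings({"theme": "dark"})    # the user changes a preference
print(load_settings()["theme"])     # dark
```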
Portable applications can be stored on any data storage device, including internal mass storage, a file share, cloud storage or external storage such as USB drives, pen drives and floppy disks—storing its program files and any configuration information and data on the storage medium alone. If no configuration information is required a portable program can be run from read-only storage such as CD-ROMs and DVD-ROMs. Some applications are available in both installable and portable versions.
Some applications which are not portable by default do support optional portability through other mechanisms, the most common being command-line arguments. Examples might include /portable to simply instruct the program to behave as a portable program, or --cfg=/path/inifile to specify the configuration file location.
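A hypothetical handler for such flags, modeled on the `/portable` and `--cfg` examples above (the flag spellings and paths are illustrative, not those of any real program), might look like:

```python
import argparse
import os
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--portable", action="store_true",
                    help="store settings beside the program")
parser.add_argument("--cfg", help="explicit configuration file location")
args = parser.parse_args(["--portable"])      # simulate the command line

if args.cfg:                                  # an explicit path wins
    config_path = args.cfg
elif args.portable:                           # settings travel with the app
    config_path = os.path.join(
        os.path.dirname(os.path.abspath(sys.argv[0])), "app.ini")
else:                                         # normal installed behaviour
    config_path = os.path.expanduser("~/.config/app.ini")

print(os.path.basename(config_path))  # app.ini
```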
Like any application, portable applications must be compatible with the computer system hardware and operating system.
Depending on the operating system, portability is more or less complex to implement; on operating systems such as AmigaOS, all applications are by definition portable.
== Portable Windows applications ==
Most portable applications do not leave files or settings on the host computer or modify the existing system and its configuration. Ideally, a portable application does not write to the Windows registry or store its configuration files (such as an INI file) in the user's profile, although today many do; many others, however, still store their configuration files in the portable directory. Because file paths often differ between computers due to variation in drive letter assignments, portable applications may store such paths in a relative format. While some applications have options to support this behavior, many programs are not designed to do so. A common technique for such programs is the use of a launcher program that copies necessary settings and files to the host computer when the application starts and moves them back to the application's directory when it closes.
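The launcher technique can be sketched as three steps: stage, run, restore. The directories and file names below are stand-ins, and the "run" step is simulated by editing the settings file directly:

```python
import os
import shutil
import tempfile

# Toy launcher: copy settings to the host before launch, run the app,
# then move the settings back so nothing is left on the host.
portable_dir = tempfile.mkdtemp()     # stands in for the USB drive
host_profile = tempfile.mkdtemp()     # stands in for the host's profile dir

src = os.path.join(portable_dir, "app.ini")
dst = os.path.join(host_profile, "app.ini")
with open(src, "w") as f:
    f.write("lang=en\n")

shutil.copy(src, dst)                 # 1. stage settings on the host
with open(dst, "a") as f:             # 2. "run" the app; it edits them
    f.write("lang=fr\n")
shutil.move(dst, src)                 # 3. move settings back, clean the host

print(os.path.exists(dst))            # False
```

After step 3 the host profile directory is empty again, while the portable directory holds the updated settings.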
An alternative strategy for achieving application portability within Windows, without requiring application source code changes, is application virtualization: An application is "sequenced" or "packaged" against a runtime layer that transparently intercepts its file system and registry calls, then redirects these to other persistent storage without the application's knowledge. This approach leaves the application itself unchanged, yet portable.
The same approach can be applied to individual application components rather than the entire application: run-time libraries, COM components, or ActiveX controls. Components ported in this manner can be integrated into original portable applications and repeatedly instantiated (virtually installed) with different configurations or settings on the same operating system (OS) without mutual conflicts. Because the ported components do not affect the OS-protected related entities (registry and files), they do not require administrative privileges for installation and management.
Microsoft saw the need for an application-specific registry for its Windows operating system as far back as 2005. It eventually incorporated some of this technology, using the techniques mentioned above, via its Application Compatibility Database using its Detours code library, into Windows XP. It did not make any of this technology available via its system APIs.
== Portability on Unix-like systems ==
Programs written with a Unix-like base in mind often make fewer assumptions about privileges. Whereas many Windows programs assume the user is an administrator—something very prevalent in the days of Windows 95/98/ME (and to some degree in Windows XP/2000, though not in Windows Vista or Windows 7)—such assumptions would quickly result in "Permission denied" errors in Unix-like environments, since users are in an unprivileged state much more often. Programs are therefore generally designed to use the HOME environment variable to store settings (e.g. $HOME/.w3m for the w3m browser). The dynamic linker provides an environment variable, LD_LIBRARY_PATH, that programs can use to load libraries from non-standard directories. Assuming /mnt contains the portable programs and configuration, a command line may look like:
HOME=/mnt/home/user LD_LIBRARY_PATH=/mnt/usr/lib /mnt/usr/bin/w3m www.example.com
A Linux application that needs no user interaction (e.g. adapting a script or an environment variable) to cope with varying directory paths can be achieved with the GCC linker option $ORIGIN, which allows a relative library search path.
Not all programs honor this—some completely ignore $HOME and instead do a user look-up in /etc/passwd to find the home directory, therefore thwarting portability.
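The two look-up strategies can be demonstrated directly (Unix-only sketch; the override path is taken from the example above). A cooperative program reads $HOME, while a program that queries the passwd database never sees the override:

```python
import os
import pwd

os.environ["HOME"] = "/mnt/home/user"   # the portable environment's override

home_via_env = os.environ.get("HOME")   # what cooperative programs use
home_via_passwd = pwd.getpwuid(os.getuid()).pw_dir  # ignores the override

print(home_via_env)                      # /mnt/home/user
print(home_via_env != home_via_passwd)   # the override is invisible via passwd
```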
There are also cross-distro package formats that do not require admin rights to run, such as Autopackage, AppImage, and CDE, but these gained only limited acceptance and support in the Linux community in the 2000s. Around 2015, the idea of portable, distro-independent packaging for the Linux ecosystem gained more traction when Linus Torvalds discussed the topic at DebConf 2014 and later endorsed AppImage for his dive log application Subsurface. MuseScore and Krita followed in 2016, starting to use AppImage builds for software deployment. Red Hat released the Flatpak system in 2016, a successor of Alexander Larsson's glick project, which was inspired by klik (now called AppImage). Similarly, Canonical released Snap packages in 2016 for Ubuntu and many other Linux distros.
Many Mac applications that can be installed by drag-and-drop are inherently portable as Mac application bundles. Examples include Mozilla Firefox, Skype and Google Chrome which do not require admin access and do not need to be placed into a central, restricted area. Applications placed into /Users/username/Applications (~/Applications) are registered with macOS LaunchServices in the same way as applications placed into the main /Applications folder. For example, right-clicking a file in Finder and then selecting "Open With..." will show applications available from both /Applications and ~/Applications. Developers can create Mac product installers which allow the user to perform a home directory install, labelled "Install for me only" in the Installer user interface. Such an installation is performed as the user.
== See also ==
Load drive
List of portable software
WinPenPack
Portable application creators
PortableApps.com
U3
Application virtualization
Turbo (software)
VMware ThinApp
Live USB
Ceedo
Portable-VirtualBox
Windows To Go
Data portability
Interoperability
== References ==
Software engineering is a branch of both computer science and engineering focused on designing, developing, testing, and maintaining software applications. It involves applying engineering principles and computer programming expertise to develop software systems that meet user needs.
The terms programmer and coder overlap with software engineer, but they imply only the construction aspect of a typical software engineer's workload.
A software engineer applies a software development process, which involves defining, implementing, testing, managing, and maintaining software systems, as well as developing the software development process itself.
== History ==
Beginning in the 1960s, software engineering was recognized as a separate field of engineering.
The development of software engineering was seen as a struggle. Problems included software that was over budget, exceeded deadlines, required extensive debugging and maintenance, failed to meet the needs of consumers, or was never completed.
In 1968, NATO held the first software engineering conference, where issues related to software were addressed. Guidelines and best practices for the development of software were established.
The origins of the term software engineering have been attributed to various sources. The term appeared in a list of services offered by companies in the June 1965 issue of "Computers and Automation" and was used more formally in the August 1966 issue of Communications of the ACM (Volume 9, number 8) in "President's Letter to the ACM Membership" by Anthony A. Oettinger. It is also associated with the title of a NATO conference in 1968 by Professor Friedrich L. Bauer. Margaret Hamilton described the discipline of "software engineering" during the Apollo missions to give what they were doing legitimacy. At the time, there was perceived to be a "software crisis". The 40th International Conference on Software Engineering (ICSE 2018) celebrates 50 years of "Software Engineering" with the Plenary Sessions' keynotes of Frederick Brooks and Margaret Hamilton.
In 1984, the Software Engineering Institute (SEI) was established as a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States.
Watts Humphrey founded the SEI Software Process Program, aimed at understanding and managing the software engineering process. The Process Maturity Levels introduced became the Capability Maturity Model Integration for Development (CMMI-DEV), which defined how the US Government evaluates the abilities of a software development team.
Modern, generally accepted best practices for software engineering have been collected by the ISO/IEC JTC 1/SC 7 subcommittee and published as the Software Engineering Body of Knowledge (SWEBOK). Software engineering is considered one of the major computing disciplines.
== Terminology ==
=== Definition ===
Notable definitions of software engineering include:
"The systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software."—The Bureau of Labor Statistics—IEEE Systems and software engineering – Vocabulary
"The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software."—IEEE Standard Glossary of Software Engineering Terminology
"An engineering discipline that is concerned with all aspects of software production."—Ian Sommerville
"The establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines."—Fritz Bauer
"A branch of computer science that deals with the design, implementation, and maintenance of complex computer programs."—Merriam-Webster
"'Software engineering' encompasses not just the act of writing code, but all of the tools and processes an organization uses to build and maintain that code over time. [...] Software engineering can be thought of as 'programming integrated over time.'"—Software Engineering at Google
The term has also been used less formally:
as the informal contemporary term for the broad range of activities that were formerly called computer programming and systems analysis
as the broad term for all aspects of the practice of computer programming, as opposed to the theory of computer programming, which is formally studied as a sub-discipline of computer science
as the term embodying the advocacy of a specific approach to computer programming, one that urges that it be treated as an engineering discipline rather than an art or a craft, and advocates the codification of recommended practices
=== Suitability ===
Individual commentators have disagreed sharply on how to define software engineering or its legitimacy as an engineering discipline. David Parnas has said that software engineering is, in fact, a form of engineering. Steve McConnell has said that it is not, but that it should be. Donald Knuth has said that programming is an art and a science. Edsger W. Dijkstra claimed that the terms software engineering and software engineer have been misused in the United States.
== Workload ==
=== Requirements analysis ===
Requirements engineering is about elicitation, analysis, specification, and validation of requirements for software. Software requirements can be functional, non-functional or domain.
Functional requirements describe expected behaviors (i.e. outputs). Non-functional requirements specify issues like portability, security, maintainability, reliability, scalability, performance, reusability, and flexibility. They are classified into the following types: interface constraints, performance constraints (such as response time, security, storage space, etc.), operating constraints, life cycle constraints (maintainability, portability, etc.), and economic constraints. Knowledge of how the system or software works is needed when it comes to specifying non-functional requirements. Domain requirements have to do with the characteristic of a certain category or domain of projects.
=== Design ===
Software design is the process of making high-level plans for the software. Design is sometimes divided into levels:
Interface design plans the interaction between a system and its environment as well as the inner workings of the system.
Architectural design plans the major components of a system, including their responsibilities, properties, and interfaces between them.
Detailed design plans internal elements, including their properties, relationships, algorithms and data structures.
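The three levels can be illustrated with a small sketch (an invented example, not from the article), here in Python: an abstract interface that plans how the component is used, an architectural component with a stated responsibility, and a detailed internal data structure.

```python
# Invented illustration of the three design levels described above.
from abc import ABC, abstractmethod

# Interface design: how the component is used from outside.
class MessageStore(ABC):
    @abstractmethod
    def save(self, message: str) -> int: ...
    @abstractmethod
    def load(self, message_id: int) -> str: ...

# Architectural design: a major component fulfilling the interface,
# with its responsibility stated.
class InMemoryMessageStore(MessageStore):
    """Responsibility: persist messages for the lifetime of the process."""

    def __init__(self) -> None:
        # Detailed design: internal element, a dict keyed by integer id.
        self._messages: dict[int, str] = {}
        self._next_id = 0

    def save(self, message: str) -> int:
        self._next_id += 1
        self._messages[self._next_id] = message
        return self._next_id

    def load(self, message_id: int) -> str:
        return self._messages[message_id]
```

Each level constrains the next: the interface fixes the contract, the architecture assigns it to a component, and the detailed design picks the data structure behind it.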
=== Construction ===
Software construction typically involves programming (a.k.a. coding), unit testing, integration testing, and debugging so as to implement the design. "Software testing is related to, but different from, ... debugging".
Testing during this phase is generally performed by the programmer and with the purpose to verify that the code behaves as designed and to know when the code is ready for the next level of testing.
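As a minimal illustration of construction-phase unit testing (the function, its behavior, and the tests are all invented for this sketch), a programmer might verify that code behaves as designed like so:

```python
# Hypothetical construction-phase unit test using Python's built-in
# unittest module; parse_price is an invented example function.
import unittest

def parse_price(text: str) -> float:
    """Convert a price string like "$1,299.99" to a float."""
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_symbol_and_separator(self):
        self.assertEqual(parse_price("$1,299.99"), 1299.99)

# Run with: python -m unittest <module-name>
```

Tests like these tell the programmer when the code is ready to move on to the next level of testing.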
=== Testing ===
Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the software under test.
When described separately from construction, testing typically is performed by test engineers or quality assurance instead of the programmers who wrote it. It is performed at the system level and is considered an aspect of software quality.
=== Program analysis ===
Program analysis is the process of analyzing computer programs with respect to an aspect such as performance, robustness, and security.
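As a toy illustration of one narrow kind of program analysis (all names here are invented), a static check might count branch points per function using Python's ast module, a crude proxy for complexity and robustness concerns:

```python
# Toy static analysis: count branching constructs per top-level function.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try)

def branch_counts(source: str) -> dict[str, int]:
    """Map each top-level function name to its number of branch nodes."""
    tree = ast.parse(source)
    counts = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            counts[node.name] = sum(
                isinstance(n, BRANCH_NODES) for n in ast.walk(node)
            )
    return counts

code = """
def simple(x):
    return x + 1

def branchy(x):
    if x > 0:
        for i in range(x):
            x -= i
    return x
"""
print(branch_counts(code))  # {'simple': 0, 'branchy': 2}
```

Real analyzers inspect many more properties (data flow, aliasing, security taint), but the shape is the same: parse the program, walk its structure, report on an aspect of interest.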
=== Maintenance ===
Software maintenance refers to supporting the software after release. It may include but is not limited to: error correction, optimization, deletion of unused and discarded features, and enhancement of existing features.
Usually, maintenance takes up 40% to 80% of project cost.
== Education ==
Knowledge of computer programming is a prerequisite for becoming a software engineer. In 2004, the IEEE Computer Society produced the SWEBOK, which has been published as ISO/IEC Technical Report 19759:2005, describing the body of knowledge that they recommend to be mastered by a graduate software engineer with four years of experience.
Many software engineers enter the profession by obtaining a university degree or training at a vocational school. One standard international curriculum for undergraduate software engineering degrees was defined by the Joint Task Force on Computing Curricula of the IEEE Computer Society and the Association for Computing Machinery, and updated in 2014. A number of universities have Software Engineering degree programs; as of 2010, there were 244 Campus Bachelor of Software Engineering programs, 70 Online programs, 230 Masters-level programs, 41 Doctorate-level programs, and 69 Certificate-level programs in the United States.
In addition to university education, many companies sponsor internships for students wishing to pursue careers in information technology. These internships can introduce the student to real-world tasks that typical software engineers encounter every day. Similar experience can be gained through military service in software engineering.
=== Software engineering degree programs ===
Half of all practitioners today have degrees in computer science, information systems, or information technology. A small but growing number of practitioners have software engineering degrees. In 1987, the Department of Computing at Imperial College London introduced the first three-year software engineering bachelor's degree in the world; in the following year, the University of Sheffield established a similar program. In 1996, the Rochester Institute of Technology established the first software engineering bachelor's degree program in the United States; however, it did not obtain ABET accreditation until 2003, the same year as Rice University, Clarkson University, Milwaukee School of Engineering, and Mississippi State University. In 1997, PSG College of Technology in Coimbatore, India was the first to start a five-year integrated Master of Science degree in Software Engineering.
Since then, software engineering undergraduate degrees have been established at many universities. A standard international curriculum for undergraduate software engineering degrees, SE2004, was defined by a steering committee between 2001 and 2004 with funding from the Association for Computing Machinery and the IEEE Computer Society. As of 2004, about 50 universities in the U.S. offer software engineering degrees, which teach both computer science and engineering principles and practices. The first software engineering master's degree was established at Seattle University in 1979. Since then, graduate software engineering degrees have been made available from many more universities. Likewise in Canada, the Canadian Engineering Accreditation Board (CEAB) of the Canadian Council of Professional Engineers has recognized several software engineering programs.
In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program in Software Engineering in the world. Additionally, many online advanced degrees in Software Engineering have appeared such as the Master of Science in Software Engineering (MSE) degree offered through the Computer Science and Engineering Department at California State University, Fullerton. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers. ETS (École de technologie supérieure) University and UQAM (Université du Québec à Montréal) were mandated by IEEE to develop the Software Engineering Body of Knowledge (SWEBOK), which has become an ISO standard describing the body of knowledge covered by a software engineer.
== Profession ==
Legal requirements for the licensing or certification of professional software engineers vary around the world. In the UK, there is no licensing or legal requirement to assume or use the job title Software Engineer. In some areas of Canada, such as Alberta, British Columbia, Ontario, and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation. In Europe, Software Engineers can obtain the European Engineer (EUR ING) professional title. Software Engineers can also become professionally qualified as a Chartered Engineer through the British Computer Society.
In the United States, the NCEES began offering a Professional Engineer exam for Software Engineering in 2013, thereby allowing Software Engineers to be licensed and recognized. NCEES ended the exam after April 2019 due to lack of participation. Mandatory licensing remains widely debated and is perceived as controversial.
The IEEE Computer Society and the ACM, the two main US-based professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge – 2004 Version, or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The most current version is SWEBOK v4. The IEEE also promulgates a "Software Engineering Code of Ethics".
=== Employment ===
There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016.
Many software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process. Other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role. In small projects, people may fill several or all roles at the same time. Many companies hire interns, often university or college students during a summer break, and some offer externships. Specializations include analysts, architects, developers, testers, technical support, middleware analysts, project managers, software product managers, educators, and researchers.
Most software engineers and programmers work 40 hours a week, but about 15 percent of software engineers and 11 percent of programmers worked more than 50 hours a week in 2008. Injuries are possible in these occupations because, like other workers who spend long periods sitting in front of a computer terminal typing at a keyboard, engineers and programmers are susceptible to eyestrain, back discomfort, thrombosis, obesity, and hand and wrist problems such as carpal tunnel syndrome.
==== United States ====
The U.S. Bureau of Labor Statistics (BLS) counted 1,365,500 software developers holding jobs in the U.S. in 2018. Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and many software engineers hold computer science degrees. The BLS estimates that computer software engineering jobs will increase by 17% from 2023 to 2033, down from its 25% estimate for 2022 to 2032, which was in turn down from its 30% estimate for 2010 to 2020. Because of this trend, job growth may not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the United States may instead be outsourced to computer software engineers in countries such as India. In addition, the BLS Occupational Outlook for computer programmers predicts a decline of 7 percent from 2016 to 2026, a further decline of 9 percent from 2019 to 2029, a decline of 10 percent from 2021 to 2031, and then a decline of 11 percent from 2022 to 2032. Since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower. Furthermore, the proportion of women in many software fields has been declining over the years as compared to other engineering fields. There is also concern that recent advances in artificial intelligence might reduce the demand for future generations of software engineers. However, this trend may change or slow in the future as many current software engineers in the U.S. market leave the profession or age out of the market in the next few decades.
=== Certification ===
The Software Engineering Institute offers certifications on specific topics like security, process improvement and software architecture. IBM, Microsoft and other companies also sponsor their own certification examinations. Many IT certification programs are oriented toward specific technologies, and managed by the vendors of these technologies. These certification programs are tailored to the institutions that would employ people who use these technologies.
Broader certification of general software engineering skills is available through various professional societies. As of 2006, the IEEE had certified over 575 software professionals as a Certified Software Development Professional (CSDP). In 2008 they added an entry-level certification known as the Certified Software Development Associate (CSDA). The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. The ACM and the IEEE Computer Society together examined the possibility of licensing of software engineers as Professional Engineers in the 1990s, but eventually decided that such licensing was inappropriate for the professional industrial practice of software engineering. John C. Knight and Nancy G. Leveson presented a more balanced analysis of the licensing issue in 2002.
In the U.K. the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified members (MBCS). Software engineers may be eligible for membership of the British Computer Society or Institution of Engineering and Technology and so qualify to be considered for Chartered Engineer status through either of those institutions. In Canada the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP). In Ontario, Canada, Software Engineers who graduate from a Canadian Engineering Accreditation Board (CEAB) accredited program, successfully complete PEO's (Professional Engineers Ontario) Professional Practice Examination (PPE), and have at least 48 months of acceptable engineering experience are eligible to be licensed through the Professional Engineers Ontario and can become Professional Engineers (P.Eng). However, the PEO does not recognize any online or distance education, and does not consider Computer Science programs to be equivalent to software engineering programs despite the tremendous overlap between the two. This has sparked controversy and a certification war, and has kept the number of P.Eng holders for the profession exceptionally low. The vast majority of working professionals in the field hold a degree in CS, not SE, and given the difficult certification path for holders of non-SE degrees, most never pursue the license.
=== Impact of globalization ===
The initial impact of outsourcing, and the relatively lower cost of international human resources in developing third world countries, led to a massive migration of software development activities from corporations in North America and Europe to India and, later, China, Russia, and other developing countries. This approach had some flaws, mainly the distance and time-zone difference that prevented human interaction between clients and developers, and the massive job transfer. This had a negative impact on many aspects of the software engineering profession. For example, some students in the developed world avoid education related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers. Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected. Nevertheless, the ability to smartly leverage offshore and near-shore resources via the follow-the-sun workflow has improved the overall operational capability of many organizations. When North Americans leave work, Asians are just arriving to work; when Asians are leaving work, Europeans arrive to work. This provides a continuous ability to have human oversight on business-critical processes 24 hours per day, without paying overtime compensation or disrupting a key human resource: sleep patterns.
While global outsourcing has several advantages, global, and generally distributed, development can run into serious difficulties resulting from the distance between developers. The key elements of this type of distance have been identified as geographical, temporal, cultural, and communication-based (including the use of different languages and dialects of English in different locations). Research has been carried out in the area of global software development over the last 15 years, and an extensive body of relevant work has been published highlighting the benefits and problems associated with this complex activity. As with other aspects of software engineering, research is ongoing in this and related areas.
=== Prizes ===
There are various prizes in the field of software engineering:
ACM-AAAI Allen Newell Award (USA). Awarded for career contributions that have breadth within computer science, or that bridge computer science and other disciplines.
BCS Lovelace Medal. Awarded to individuals who have made outstanding contributions to the understanding or advancement of computing.
ACM SIGSOFT Outstanding Research Award, selected for individual(s) who have made "significant and lasting research contributions to the theory or practice of software engineering."
More ACM SIGSOFT Awards.
The Codie award, a yearly award issued by the Software and Information Industry Association for excellence in software development within the software industry.
Harlan Mills Award for "contributions to the theory and practice of the information sciences, focused on software engineering".
ICSE Most Influential Paper Award.
Jolt Award, also for the software industry.
Stevens Award given in memory of Wayne Stevens.
== Criticism ==
Some call for licensing, certification and codified bodies of knowledge as mechanisms for spreading the engineering knowledge and maturing the field.
Some claim that the concept of software engineering is so new that it is rarely understood, and it is widely misinterpreted, including in software engineering textbooks, papers, and among the communities of programmers and crafters.
Some claim that a core issue with software engineering is that its approaches are not empirical enough because a real-world validation of approaches is usually absent, or very limited and hence software engineering is often misinterpreted as feasible only in a "theoretical environment."
Edsger Dijkstra, a founder of many of the concepts in software development today, rejected the idea of "software engineering" up until his death in 2002, arguing that those terms were poor analogies for what he called the "radical novelty" of computer science:
A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."
== See also ==
=== Study and practice ===
Computer science
Data engineering
Software craftsmanship
Software development
Release engineering
=== Roles ===
Programmer
Systems analyst
Systems architect
=== Professional aspects ===
Bachelor of Science in Information Technology
Bachelor of Software Engineering
List of software engineering conferences
List of computer science journals (including software engineering journals)
Software Engineering Institute
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Pierre Bourque; Richard E. (Dick) Fairley, eds. (2014). Guide to the Software Engineering Body of Knowledge Version 3.0 (SWEBOK). IEEE Computer Society.
Roger S. Pressman; Bruce Maxim (January 23, 2014). Software Engineering: A Practitioner's Approach (8th ed.). McGraw-Hill. ISBN 978-0-07-802212-8.
Ian Sommerville (March 24, 2015). Software Engineering (10th ed.). Pearson Education Limited. ISBN 978-0-13-394303-0.
Jalote, Pankaj (2005) [1991]. An Integrated Approach to Software Engineering (3rd ed.). Springer. ISBN 978-0-387-20881-7.
Bruegge, Bernd; Dutoit, Allen (2009). Object-oriented software engineering : using UML, patterns, and Java (3rd ed.). Prentice Hall. ISBN 978-0-13-606125-0.
Oshana, Robert (2019-06-21). Software engineering for embedded systems : methods, practical techniques, and applications (Second ed.). Kidlington, Oxford, United Kingdom. ISBN 978-0-12-809433-4.
== External links ==
Pierre Bourque; Richard E. Fairley, eds. (2014). Guide to the Software Engineering Body of Knowledge Version 3.0 (SWEBOK), https://www.computer.org/web/swebok/v3. IEEE Computer Society.
The Open Systems Engineering and Software Development Life Cycle Framework Archived 2010-07-18 at the Wayback Machine OpenSDLC.org the integrated Creative Commons SDLC
Software Engineering Institute Carnegie Mellon
Software engineers make up a significant portion of the global workforce. As of 2022, there are an estimated 26.9 million professional software engineers worldwide, up from 21 million in 2016.
== By country ==
=== United States ===
In 2023, there were an estimated 1.6 million professional software developers in North America. There are 166 million people employed in the US workforce, making software developers 0.96% of the total workforce.
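The 0.96% figure follows directly from the two numbers quoted:

```python
# Checking the share quoted above: 1.6 million developers out of a
# 166-million-person US workforce.
developers = 1_600_000
workforce = 166_000_000
share = developers / workforce * 100
print(round(share, 2))  # 0.96
```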
==== Summary ====
==== Software engineers vs. traditional engineers ====
The following two tables compare the number of software engineers (611,900 in 2002) versus the number of traditional engineers (1,157,020 in 2002).
There are another 1,500,000 people in system analysis, system administration, and computer support, many of whom might be called software engineers. Many systems analysts manage software development teams, and as analysis is an important software engineering role, many of them may be considered software engineers in the near future. This means that the number of software engineers may actually be much higher.
The number of software engineers declined by 5 to 10 percent from 2000 to 2002.
==== Computer managers vs. construction and engineering managers ====
Computer and information system managers (264,790) manage software projects, as well as computer operations. Similarly, Construction and engineering managers (413,750) oversee engineering projects, manufacturing plants, and construction sites. Computer management is 64% the size of construction and engineering management.
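The 64% figure above is simply the ratio of the two headcounts:

```python
# Ratio of computer management to construction/engineering management,
# using the headcounts quoted in the text.
computer_managers = 264_790
construction_engineering_managers = 413_750
ratio = computer_managers / construction_engineering_managers * 100
print(round(ratio))  # 64
```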
==== Software engineering educators vs. engineering educators ====
Most people working in the field of computer science, whether making software systems (software engineering) or studying the theoretical and mathematical facets of software systems (computer science), acquire degrees in computer science. The data shows that the combined number of chemistry and physics educators (29,610) nearly equals the number of engineering educators (29,310). It is estimated that roughly half of computer science educators emphasize the practical (software engineering), and the other half emphasize the theoretical (computer science). This means that software engineering education is 56% the size of traditional engineering education. There are more computer science educators than chemistry and physics educators combined, or engineering educators.
==== Other software and engineering roles ====
==== Relation to IT demographics ====
Software engineers are part of the much larger software, hardware, application, and operations community. In 2000 in the U.S., there were about 680,000 software engineers and about 10,000,000 IT workers.
There are no numbers on testers in the BLS data.
=== India ===
There has been a healthy growth in the number of India's IT professionals over the past few years. From a base of 6,800 knowledge workers in 1985–86, the number increased to 522,000 software and services professionals by the end of 2001–02. It is estimated that out of these 528,000 knowledge workers, almost 170,000 are working in the IT software and services export industry; nearly 106,000 are working in the IT enabled services and over 230,000 in user organizations.
=== Australia ===
In May 2024, the Australian government reported that 169,300 Australians were employed as software and applications programmers, 17% of whom were women. The occupation grew by 8,300 workers annually.
=== Russia ===
According to the Russian government, the number of IT specialists in the country increased by 13% in 2023, reaching approximately 857,000. During the initial phase of the 2022 invasion of Ukraine, an estimated 100,000 IT specialists left Russia.
== See also ==
Software engineering
List of software engineering topics
Software engineering economics
Software engineering professionalism
== References ==
A Bachelor of Science in Information Technology (abbreviated BSIT or B.Sc. IT) is a bachelor's degree awarded for an undergraduate program in information technology. The degree is normally required in order to work in the information technology industry.
A Bachelor of Science in Information Technology (B.Sc. IT) degree program typically takes three to four years depending on the country. This degree is primarily focused on subjects such as software, databases, and networking.
The degree is a Bachelor of Science degree with institutions conferring degrees in the fields of information technology and related fields. This degree is awarded for completing a program of study in the field of software development, software testing, software engineering, web design, databases, programming, computer networking and computer systems.
Graduates with an information technology background are able to perform technology tasks relating to the processing, storing, and communication of information between computers, mobile phones, and other electronic devices. Information technology as a field emphasizes the secure management of large amounts of variable information and its accessibility via a wide variety of systems both local and worldwide.
== Skills taught ==
Generally, software and information technology companies look for people with strong skills in programming, systems analysis, and software testing.
Many colleges teach practical skills that are crucial to becoming a software developer. As logical reasoning and critical thinking are important in becoming a software professional, this degree encompasses the complete process of software development from software design and development to final testing.
Students who complete their undergraduate education in software engineering at a satisfactory level often pursue graduate studies such as a Master of Science in Information Technology (M.Sc. IT), sometimes continuing on to a doctoral program and earning a doctorate such as a Doctor of Information Technology (DIT).
== International variations ==
=== Bangladesh ===
In Bangladesh, the Bachelor of Engineering in Information Technology is awarded following a four-year course of study at institutions including Dhaka University, Jahangirnagar University, Bangladesh University of Professionals, and the University of Information Technology and Sciences.
=== India ===
In India, an engineering degree in Information Technology is a four-year academic program broadly equivalent to Computer Science and Engineering: in the first year, basic engineering subjects and calculus are taught, and in the succeeding years core computer science topics are taught in both B.Tech-IT and B.Tech-CSE.
=== Nepal ===
In Nepal, the Bachelor of Science in Computer Science and Information Technology (B.Sc. CSIT) is a four-year course of study. The degree is provided by Tribhuvan University, and the degree awarded is referred to as BScCSIT.
=== Philippines ===
In the Philippines, the BSIT program normally takes 4 years to complete, though schools on a trimester system take less time. CHED has set a total of 486 internship hours for the program.
=== Thailand ===
In Thailand, the Bachelor of Science in Information Technology (BS IT) is a four-year undergraduate degree program which is a subject of accreditation by the Office of the Higher Education Commission (OHEC) and the Office for National Education Standards and Quality Assessment (ONESQA) of the Ministry of Higher Education, Science, Research and Innovation (MHESI).
The first international BS IT program, using English as a medium of instruction (EMI), was established in 1990 at the Faculty of Science and Technology (renamed in 2013 to the Vincent Mary School of Science and Technology (VMS)) at Assumption University of Thailand (AU). VMS updated the BS IT curriculum in 2019 to develop students' potential and to incorporate marketing communications needs.
=== United States ===
In the United States, a B.S. in Information Technology is awarded after a four-year course of study. Some degree programs are accredited by the Computing Accreditation Commission of the Accreditation Board for Engineering and Technology (ABET).
=== United Arab Emirates ===
In the UAE, Skyline University College offers a four-year Bachelor of Science in Information Technology in enterprise computing.
Ajman University's Bachelor of Science in Information Technology programme provides students with a comprehensive understanding of computer science and technology, preparing them for careers in the Information Technology sector.
=== Kenya ===
In Kenya, the Bachelor of Science in Information Technology (BS IT) is a four-year undergraduate degree program accredited by the Commission for University Education (CUE). It is awarded following a four-year course of study at institutions such as the University of Nairobi, Meru University of Science and Technology, Moi University, and Dedan Kimathi University of Technology.
== See also ==
Bachelor of Computing
Bachelor of Information Technology
Bachelor of Computer Science
Bachelor of Software Engineering
Bachelor of Computer Information Systems
== References ==
A software design description (a.k.a. software design document or SDD; just design document; also Software Design Specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design's stakeholders. An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision; it needs to be a stable reference that outlines all parts of the software and how they will work.
== Composition ==
The SDD usually contains the following information:
The Data-driven design describes structures that reside within the software. Attributes and relationships between data objects dictate the choice of data structures.
The architecture design uses information flow characteristics and maps them into the program structure. The transformation mapping method is applied to exhibit distinct boundaries between incoming and outgoing data. The data flow diagrams allocate control input, processing, and output along three separate modules.
The interface design describes internal and external program interfaces, as well as the design of the human interface. Internal and external interface designs are based on the information obtained from the analysis model.
The procedural design describes structured programming concepts using graphical, tabular and textual notations.
These design mediums enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
== IEEE 1016 ==
IEEE 1016-2009, titled IEEE Standard for Information Technology—Systems Design—Software Design Descriptions, is an IEEE standard that specifies "the required information content and organization" for an SDD. IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions."
The 2009 edition was a major revision to IEEE 1016-1998, elevating it from recommended practice to full standard. This revision was modeled after IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-intensive Systems, extending the concepts of view, viewpoint, stakeholder, and concern from architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016, Introduction]
Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint. IEEE 1016 defines the following design viewpoints for use:
Context viewpoint
Composition viewpoint
Logical viewpoint
Dependency viewpoint
Information viewpoint
Patterns use viewpoint
Interface viewpoint
Structure viewpoint
Interaction viewpoint
State dynamics viewpoint
Algorithm viewpoint
Resource viewpoint
In addition, users of the standard are not limited to these viewpoints but may define their own.
== IEEE status ==
IEEE 1016-2009 is currently listed as 'Inactive - Reserved'.
== See also ==
Game design document
High-level design
Low-level design
== References ==
== External links ==
IEEE 1016 website
A method in object-oriented programming (OOP) is a procedure associated with an object, and generally also a message. An object consists of state data and behavior; these compose an interface, which specifies how the object may be used. A method is a behavior of an object parametrized by a user.
Data is represented as properties of the object, and behaviors are represented as methods. For example, a Window object could have methods such as open and close, while its state (whether it is open or closed at any given point in time) would be a property.
In class-based programming, methods are defined within a class, and objects are instances of a given class. One of the most important capabilities that a method provides is method overriding: the same name (e.g., area) can be used for multiple different kinds of classes. This allows sending objects to invoke behaviors and to delegate the implementation of those behaviors to the receiving object. In Java, for example, a method defines the behavior of a class object: an object can send an area message to another object, and the appropriate formula is invoked whether the receiving object is a rectangle, circle, triangle, etc.
Methods also provide the interface that other classes use to access and modify the properties of an object; this is known as encapsulation. Encapsulation and overriding are the two primary distinguishing features between methods and procedure calls.
== Overriding and overloading ==
Method overriding and overloading are two of the most significant ways that a method differs from a conventional procedure or function call. Overriding refers to a subclass redefining the implementation of a method of its superclass. For example, findArea may be a method defined on a shape class; subclasses such as triangle would each define the appropriate formula to calculate their area. The idea is to look at objects as "black boxes" so that changes to the internals of the object can be made with minimal impact on the other objects that use it. This is known as encapsulation and is meant to make code easier to maintain and re-use.
Method overloading, on the other hand, refers to differentiating the code used to handle a message based on the parameters of the method. If one views the receiving object as the first parameter in any method then overriding is just a special case of overloading where the selection is based only on the first argument. The following simple Java example illustrates the difference:
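A minimal sketch of such an example (the class and method names here are illustrative, not taken from any particular codebase):

```java
// Overriding: a subclass replaces the superclass implementation of area().
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    double radius = 2.0;
    @Override
    double area() { return Math.PI * radius * radius; }  // same signature, new body
}

// Overloading: one class defines the same name with different parameter lists.
class Printer {
    String print(int x)    { return "int: " + x; }
    String print(String s) { return "string: " + s; }
}
```

With Shape s = new Circle(), the call s.area() is dispatched at run time to Circle's implementation (overriding), whereas the choice between the two print variants is made at compile time from the argument types (overloading).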
== Accessor, mutator and manager methods ==
Accessor methods are used to read the data values of an object. Mutator methods are used to modify the data of an object. Manager methods are used to initialize and destroy objects of a class, e.g. constructors and destructors.
These methods provide an abstraction layer that facilitates encapsulation and modularity. For example, if a bank-account class provides a getBalance() accessor method to retrieve the current balance (rather than directly accessing the balance data fields), then later revisions of the same code can implement a more complex mechanism for balance retrieval (e.g., a database fetch), without the dependent code needing to be changed. The concepts of encapsulation and modularity are not unique to object-oriented programming. Indeed, in many ways the object-oriented approach is simply the logical extension of previous paradigms such as abstract data types and structured programming.
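The bank-account example above might be sketched as follows (the deposit mutator is an added illustration):

```java
class BankAccount {
    private double balance;                    // state hidden behind the interface

    BankAccount(double opening) {              // manager method (constructor)
        this.balance = opening;
    }

    double getBalance() { return balance; }    // accessor: reads state

    void deposit(double amount) {              // mutator: modifies state
        balance += amount;
    }
}
```

Because callers reach the balance only through getBalance(), a later revision could fetch the value from a database without changing dependent code.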
=== Constructors ===
A constructor is a method that is called at the beginning of an object's lifetime to create and initialize the object, a process called construction (or instantiation). Initialization may include an acquisition of resources. In most languages, constructors may have parameters but do not return values. See the following example in Java:
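A minimal sketch, reusing the article's earlier Window example:

```java
class Window {
    private boolean isOpen;

    // Constructor: same name as the class, no return type; runs at instantiation.
    Window() {
        this.isOpen = false;   // every new Window starts in a known state
    }

    boolean isOpen() { return isOpen; }
}
```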
=== Destructor ===
A destructor is a method that is called automatically at the end of an object's lifetime, a process called destruction. Most languages do not allow destructors to take arguments or return values. Destructors can be implemented so as to perform cleanup chores and other tasks at object destruction.
==== Finalizers ====
In garbage-collected languages, such as Java, C#, and Python, destructors are known as finalizers. They have a similar purpose and function to destructors, but because of the differences between languages that utilize garbage collection and languages with manual memory management, the sequence in which they are called is different.
== Abstract methods ==
An abstract method is one with only a signature and no implementation body. It is often used to specify that a subclass must provide an implementation of the method, as in an abstract class. Abstract methods are used to specify interfaces in some programming languages.
=== Example ===
The following Java code shows an abstract class that needs to be extended:
The following subclass extends the main class:
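The two fragments referenced above might look like this (Figure and Rectangle are illustrative names):

```java
// Abstract class: area() has a signature but no body.
abstract class Figure {
    abstract double area();
}

// Concrete subclass: must implement area() before it can be instantiated.
class Rectangle extends Figure {
    double width = 3.0, height = 4.0;

    @Override
    double area() { return width * height; }
}
```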
=== Reabstraction ===
If a subclass provides an implementation for an abstract method, another subclass can make it abstract again. This is called reabstraction.
In practice, this is rarely used.
==== Example ====
In C#, a virtual method can be overridden with an abstract method. (This also applies to Java, where all non-private instance methods are virtual.)
Interfaces' default methods can also be reabstracted, requiring subclasses to implement them. (This also applies to Java.)
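In Java terms, both forms of reabstraction described above can be sketched as follows (class and interface names are illustrative):

```java
class Base {
    String greet() { return "hello from Base"; }   // concrete method
}

abstract class Middle extends Base {
    @Override
    public abstract String greet();   // reabstraction: concrete becomes abstract again
}

class Leaf extends Middle {
    @Override
    public String greet() { return "hello from Leaf"; }   // must re-implement
}

// Interface default methods can likewise be reabstracted.
interface Greeter {
    default String name() { return "anonymous"; }
}

interface NamedGreeter extends Greeter {
    String name();   // removes the default; implementors must now define it
}
```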
== Class methods ==
Class methods are methods that are called on a class rather than an instance. They are typically used as part of an object meta-model, i.e., for each class an instance of the class object is created in the meta-model. Meta-model protocols allow classes to be created and deleted. In this sense, they provide the same functionality as constructors and destructors described above. But in some languages such as the Common Lisp Object System (CLOS) the meta-model allows the developer to dynamically alter the object model at run time: e.g., to create new classes, redefine the class hierarchy, modify properties, etc.
== Special methods ==
Special methods are very language-specific and a language may support none, some, or all of the special methods defined here. A language's compiler may automatically generate default special methods or a programmer may be allowed to optionally define special methods. Most special methods cannot be directly called, but rather the compiler generates code to call them at appropriate times.
=== Static methods ===
Static methods are meant to be relevant to all the instances of a class rather than to any specific instance. They are similar to static variables in that sense. An example would be a static method to sum the values of all the variables of every instance of a class; if there were a Product class, for instance, it might have a static method to compute the average price of all products.
A static method can be invoked even if no instances of the class exist yet. Static methods are called "static" because they are resolved at compile time based on the class they are called on, not dynamically as is the case with instance methods, which are resolved polymorphically based on the runtime type of the object.
==== Examples ====
===== In Java =====
In Java, a commonly used static method is:
Math.max(double a, double b)
This static method has no owning object and does not run on an instance. It receives all information from its arguments.
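For comparison, a user-defined static method alongside the Math.max call (PriceStats and averageOf are hypothetical names):

```java
class PriceStats {
    // Static method: belongs to the class, not to any instance,
    // so it can be called before any PriceStats object exists.
    static double averageOf(double a, double b) {
        return (a + b) / 2.0;
    }
}
```

Both Math.max(3.5, 7.25) and PriceStats.averageOf(3.5, 7.25) are invoked on a class name, receive everything they need through their arguments, and are resolved at compile time.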
=== Copy-assignment operators ===
Copy-assignment operators define actions to be performed by the compiler when a class object is assigned to a class object of the same type.
=== Operator methods ===
Operator methods define or redefine operator symbols and define the operations to be performed with the symbol and the associated method parameters. C++ example:
== Member functions in C++ ==
Some procedural languages were extended with object-oriented capabilities to leverage the large skill sets and legacy code for those languages but still provide the benefits of object-oriented development. Perhaps the most well-known example is C++, an object-oriented extension of the C programming language. Due to the design requirements to add the object-oriented paradigm on to an existing procedural language, message passing in C++ has some unique capabilities and terminologies. For example, in C++ a method is known as a member function. C++ also has the concept of virtual functions which are member functions that can be overridden in derived classes and allow for dynamic dispatch.
=== Virtual functions ===
Virtual functions are the means by which a C++ class can achieve polymorphic behavior. Non-virtual member functions, or regular methods, are those that do not participate in polymorphism.
C++ Example:
== See also ==
Property (programming)
Remote method invocation
Subroutine, also called subprogram, routine, procedure or function
== Notes ==
== References ==
Open energy system database projects employ open data methods to collect, clean, and republish energy-related datasets for open use. The resulting information is then available, given a suitable open license, for statistical analysis and for building numerical energy system models, including open energy system models. Permissive licenses like Creative Commons CC0 and CC BY are preferred, but some projects will house data made public under market transparency regulations and carrying unqualified copyright.
The databases themselves may furnish information on national power plant fleets, renewable generation assets, transmission networks, time series for electricity loads, dispatch, spot prices, and cross-border trades, weather information, and similar. They may also offer other energy statistics including fossil fuel imports and exports, gas, oil, and coal prices, emissions certificate prices, and information on energy efficiency costs and benefits.
Much of the data is sourced from official or semi-official agencies, including national statistics offices, transmission system operators, and electricity market operators. Data is also crowdsourced using public wikis and public upload facilities. Projects usually also maintain a strict record of the provenance and version histories of the datasets they hold. Some projects, as part of their mandate, also try to persuade primary data providers to release their data under more liberal licensing conditions.
Two drivers favor the establishment of such databases. The first is a wish to reduce the duplication of effort that accompanies each new analytical project as it assembles and processes the data that it needs from primary sources. And the second is an increasing desire to make public policy energy models more transparent to improve their acceptance by policymakers and the public. Better transparency dictates the use of open information, able to be accessed and scrutinized by third-parties, in addition to releasing the source code for the models in question.
== General considerations ==
=== Background ===
In the mid-1990s, energy models used structured text files for data interchange but efforts were being made to migrate to relational database management systems for data processing. These early efforts however remained local to a project and did not involve online publishing or open data principles.
The first energy information portal to go live was OpenEI in late 2009, followed by reegle in 2011.
A 2012 paper marks the first scientific publication to advocate the crowdsourcing of energy data. The 2012 PhD thesis by Chris Davis also discusses the crowdsourcing of energy data in some depth. A 2016 thesis surveyed the spatial (GIS) information requirements for energy planning and found that most types of data, with the exception of energy expenditure data, are available but nonetheless remain scattered and poorly coordinated.
In terms of open data, a 2017 paper concludes that energy research has lagged behind other fields, most notably physics, biotechnology, and medicine.: 213–214 The paper also lists the benefits of open data and open models and discusses the reasons that many projects nonetheless remain closed.: 211–213 A one-page opinion piece from 2017 advances the case for using open energy data and modeling to build public trust in policy analysis. The article also argues that scientific journals have a responsibility to require that data and code be submitted alongside text for peer review.
=== Database design ===
Data models are central to the design and organization of databases. Open energy database projects generally try to develop and adhere to well resolved data models, using de facto and published standards where applicable. Some projects attempt to coordinate their data models in order to harmonize their data and improve its utility. Defining and maintaining suitable metadata is also a key issue. The life-cycle management of data includes, but is not limited to, the use of version control to track the provenance of incoming and cleansed data. Some sites allow users to comment on and rate individual datasets.
=== Dataset copyright and database rights ===
Issues surrounding copyright remain at the forefront with regard to open energy data. As noted, most energy datasets are collated and published by official or semi-official sources. But many of the publicly available energy datasets carry no license, limiting their reuse in numerical and statistical models, open or otherwise. Copyright protected material cannot lawfully be circulated, nor can it be modified and republished.
Measures to enforce market transparency have not helped much because the associated information is again not licensed to enable modification and republication. Transparency measures include the 2013 European energy market transparency regulation 543/2013. Indeed, 543/2013 "is only an obligation to publish, not an obligation to license".: slide 14 Notwithstanding, 543/2013 does enable downloaded data to be computer processed with legal certainty.: 5
Energy databases with hardware located within the European Union are protected under a general database law, irrespective of the legal status of the information they hold.
Database rights not waived by public sector providers significantly restrict the amount of data a user can lawfully access.
A December 2017 submission by energy researchers in Germany and elsewhere highlighted a number of concerns over the re-use of public sector information within the European Union. The submission drew heavily on a recent legal opinion covering electricity data.
=== Energy statistics ===
National and international energy statistics are published regularly by governments and international agencies, such as the IEA. In 2016 the United Nations issued guidelines for energy statistics. While the definitions and sectoral breakdowns are useful when defining models, the information provided is rarely in sufficient detail to enable its use in high-resolution energy system models.: 213
=== Published standards ===
There are few published standards covering the collection and structuring of high-resolution energy system data. The IEC Common Information Model (CIM) defines data exchange protocols for low and high voltage electricity networks.
=== Non-open data ===
Although this page is about genuinely open data, some important databases remain closed. Data collected by the International Energy Agency (IEA) is widely quoted in policy studies but remains nonetheless paywalled. Researchers at Oxford University have called for this situation to change.
== Open energy system database projects ==
Energy system models are data intensive and normally require detailed information from a number of sources. Dedicated projects to collect, collate, document, and republish energy system datasets have arisen to service this need. Most database projects prefer open data, issued under free licenses, but some will accept datasets with proprietary licenses in the absence of other options.
The OpenStreetMap project, which uses the Open Database License (ODbL), contains geographic information about energy system components, including transmission lines. Wikimedia projects such as Wikidata and Wikipedia have a growing set of information related to national energy systems, such as descriptions of individual power stations.: 156–159
The following table summarizes projects that specifically publish open energy system data. Some are general repositories while others (for instance, oedb) are designed to interact with open energy system models in real-time.
Three of the projects listed work with linked open data (LOD), a method of publishing structured data on the web so that it can be networked and subject to semantic queries. The overarching concept is termed the semantic web. Technically, such projects support RESTful APIs, RDF, and the SPARQL query language. A 2012 paper reviews the use of LOD in the renewable energy domain.
=== Climate Compatible Growth starter datasets ===
The Climate Compatible Growth (CCG) programme provides starter kits for the following 69 countries: Algeria, Angola, Argentina, Benin, Botswana, Bolivia, Brazil, Burkina Faso, Burundi, Cambodia, Cameroon, Central African Republic, Chad, Chile, Colombia, Côte d'Ivoire, Democratic Republic of Congo, Djibouti, Ecuador, Egypt, Equatorial Guinea, Eritrea, Eswatini, Ethiopia, Gabon, Gambia, Ghana, Guinea, Guinea-Bissau, Indonesia, Kenya, Laos, Lesotho, Liberia, Libya, Malawi, Malaysia, Mali, Mauritania, Morocco, Mozambique, Myanmar, Namibia, Niger, Nigeria, Papua New Guinea, Paraguay, Peru, Philippines, Republic of Congo, Republic of Korea, Rwanda, Senegal, Sierra Leone, Somalia, South Africa, South Sudan, Sudan, Taiwan, Tanzania, Thailand, Togo, Tunisia, Uganda, Uruguay, Venezuela, Viet Nam, Zambia, and Zimbabwe.
The datasets are hosted on the Zenodo science archive site; visit that site and search for "ccg starter kit".
=== Energy Research Data Portal for South Africa ===
The Energy Research Data Portal for South Africa is being developed by the Energy Research Centre, University of Cape Town, Cape Town, South Africa. Coverage includes South Africa and certain other African countries where the Centre undertakes projects. The website uses the CKAN open source data portal software. A number of data formats are supported, including CSV and XLSX. The site also offers an API for automated downloads. As of March 2017, the portal contained 65 datasets.
=== energydata.info ===
The energydata.info project from the World Bank Group, Washington, DC, USA is an energy database portal designed to support national development by improving public access to energy information. As well as sharing data, the platform also offers tools to visualize and analyze energy data. Although the World Bank Group has made available a number of dataset and apps, external users and organizations are encouraged to contribute. The concepts of open data and open source development are central to the project. energydata.info uses its own fork of the CKAN open source data portal as its web-based platform. The Creative Commons CC BY 4.0 license is preferred for data but other open licenses can be deployed. Users are also bound by the terms of use for the site.
As of January 2017, the database held 131 datasets, the great majority related to developing countries. The datasets are tagged and can be easily filtered. A number of download formats, including GIS files, are supported: CSV, XLS, XLSX, ArcGIS, Esri, GeoJSON, KML, and SHP. Some datasets are also offered as HTML. Again, as of January 2017, four apps are available. Some are web-based and run from a browser.
=== Enipedia ===
The semantic wiki-site and database Enipedia lists energy systems data worldwide. Enipedia is maintained by the Energy and Industry Group, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, the Netherlands. A key tenet of Enipedia is that data displayed on the wiki is not trapped within the wiki, but can be extracted via SPARQL queries and used to populate new tools. Any programming environment that can download content from a URL can be used to obtain data. Enipedia went live in March 2011, judging by traffic figures quoted by Davis.: 185 : fig 9.17
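That pattern can be sketched in Python: a SPARQL query is URL-encoded into an HTTP GET request. The endpoint URL and property names below are purely illustrative (the site is offline), so only the request construction is shown:

```python
import urllib.parse

# Hypothetical/historical endpoint: illustrative only, not currently live.
ENDPOINT = "http://enipedia.tudelft.nl/sparql"

QUERY = """
SELECT ?plant ?capacity
WHERE { ?plant a :Powerplant ; :capacityMW ?capacity . }
LIMIT 10
"""

def build_request_url(endpoint: str, query: str) -> str:
    """SPARQL over HTTP: the query travels as a URL-encoded GET parameter."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urllib.parse.urlencode(params)

url = build_request_url(ENDPOINT, QUERY)
# When the endpoint is live, urllib.request.urlopen(url) returns JSON results.
print(url[:60])
```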
A 2010 study describes how community driven data collection, processing, curation, and sharing is revolutionizing the data needs of industrial ecology and energy system analysis. A 2012 chapter introduces a system of systems engineering (SoSE) perspective and outlines how agent-based models and crowdsourced data can contribute to the solving of global issues.
As of April 2019, the site has gone offline pending a move to the enipedia.org domain.
=== Open Energy Platform ===
The Open Energy Platform (OEP) is a collaborative versioned dataset repository for storing open energy system model datasets. A dataset is presumed to be in the form of a database table, together with metadata. Registered users can upload and download datasets manually using a web-interface or programmatically via an API using HTTP POST calls. Uploaded datasets are screened for integrity using deterministic rules and then subject to confirmation by a moderator. The use of versioning means that any prior state of the database can be accessed (as recommended in this 2012 paper). Hence, the repository is specifically designed to interoperate with energy system models. The backend is a PostgreSQL object-relational database under subversion version control. Open-data licenses are specific to each dataset. Unlike other database projects, users can download the current version (the public tables) of the entire PostgreSQL database or any previous version. The development is being led by a cross-project community.
=== Open Data Energy Networks ===
The Open Data Energy Networks (Open Data Réseaux Énergies or ODRÉ) portal is run by eight partners, led by the French national transmission system operator (TSO) Réseau de Transport d'Électricité (RTE). The portal was previously known as Open Data RTE. The site offers electricity system datasets under a Creative Commons CC BY 2.0 compatible license, with metadata, an RSS feed for notifying updates, and an interface for submitting questions. Re-users of information obtained from the site can also register third-party URLs (be they publications or webpages) against specific datasets.
The portal uses the French Government Licence Ouverte license and this is explicitly compatible with the United Kingdom Open Government Licence (OGL), the Creative Commons CC BY 2.0 license (and thereby later versions), and the Open Data Commons ODC-BY license.: 2
The site hosts electricity, gas, and weather information related to France.
=== UK Power Networks Open Data Portal ===
The Open Data Portal is run by UK Power Networks, a GB Distribution Network Operator (DNO), hosted on the OpenDataSoft platform. The Portal offers electricity network datasets under a Creative Commons CC BY 4.0 compatible license, with metadata, a newsfeed, and a data request form. Re-users of information obtained from the site can also register third-party URLs (be they publications or webpages) against specific datasets. A number of download formats, including GIS files, are supported: CSV, XLS, GeoJSON, KML, and SHP. The site also offers an API for automated downloads.
The portal uses the Creative Commons License and also hosts datasets from other sources which are licensed under the Open Government Licence (OGL).
The site hosts electricity datasets related to UK Power Networks' three license areas in London, the East and South East of England.
=== Open Power System Data ===
The Open Power System Data (OPSD) project seeks to characterize the German and western European power plant fleets, their associated transmission network, and related information and to make that data available to energy modelers and analysts. The platform was originally implemented by the University of Flensburg, DIW Berlin, Technische Universität Berlin, and the energy economics consultancy Neon Neue Energieökonomik, all from Germany. The first phase of the project, from August 2015 to July 2017, was funded by the Federal Ministry for Economic Affairs and Energy (BMWi) for €490000. The project later received funding for a second phase, from January 2018 to December 2020, with ETH Zurich replacing Flensburg University as a partner.
Developers collate and harmonize data from a range of government, regulatory, and industry sources throughout Europe. The website and the metadata utilize English, whereas the original material can be in any one of 24 languages. Datasets follow the emerging frictionless data package standard being developed by Open Knowledge Foundation (OKF). The website was launched on 28 October 2016. As of June 2018, the project offers the following primary packages, for Germany and other European countries:
details, including geolocation, of conventional power plants and renewable energy power plants
aggregated generation capacity by technology and country
hourly time series covering electrical load, day-ahead electricity spot prices, and wind and solar resources
a script to filter and download NASA MERRA-2 satellite weather data
In addition, the project hosts selected contributed packages:
electricity demand and self-generation time series for representative south German households
simulated PV and wind generation capacity factor time series for Europe, generated by the Renewables.ninja project
To facilitate analysis, the data is aggregated into large structured files (in CSV format) and loaded into data packages with standardized machine-readable metadata (in JSON format). The same data is usually also provided as XLSX (Excel) and SQLite files. The datasets can be accessed in real-time using stable URLs. The Python scripts deployed for data processing are available on GitHub and carry an MIT license. The licensing conditions for the data itself depend on the source and vary in terms of openness. Previous versions of the datasets and scripts can be recovered in order to track changes or replicate earlier studies. The project also engages with energy data providers, such as transmission system operators (TSO) and ENTSO-E, to encourage them to make their data available under open licenses (for instance, Creative Commons and ODbL licenses).
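The shape of such a release can be sketched with standard-library tools; the metadata and CSV content below are invented miniatures of an OPSD-style data package:

```python
import csv
import io
import json

# Invented miniature of a data package: JSON metadata plus a CSV resource.
metadata = json.loads('{"name": "time_series", "format": "csv"}')

csv_text = """timestamp,load_MW
2016-10-28T00:00,41250
2016-10-28T01:00,39980
"""

# In practice the CSV would be fetched from the package's stable URL.
rows = list(csv.DictReader(io.StringIO(csv_text)))
loads = [float(r["load_MW"]) for r in rows]

print(metadata["name"], len(rows), max(loads))  # time_series 2 41250.0
```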
In a 2019 publication, OPSD developers describe their design choices, implementation, and provisioning. Information integrity remains key, with each data package having traceable provenance, curation, and packing. From October 2018, each new or revised data package is assigned a unique DOI to ensure that external references to current and prior versions remain stable.
A number of published electricity market modeling analyses are based on OPSD data.
In 2017, the Open Power System Data project won the Schleswig-Holstein Open Science Award and the Germany Land of Ideas award.
=== OpenEI ===
Open Energy Information (OpenEI) is a collaborative website, run by the US government, providing open energy data to software developers, analysts, users, consumers, and policymakers. The platform is sponsored by the United States Department of Energy (DOE) and is being developed by the National Renewable Energy Laboratory (NREL). OpenEI launched on 9 December 2009. While much of its data is from US government sources, the platform is intended to be open and global in scope.
OpenEI provides two mechanisms for contributing structured information: a semantic wiki (using MediaWiki and the Semantic MediaWiki extension) for collaboratively-managed resources and a dataset upload facility for contributor-controlled resources. US government data is distributed under a CC0 public domain dedication, whereas other contributors are free to select an open data license of their choice. Users can rate data using a five-star system, based on accessibility, adaptability, usefulness, and general quality. Individual datasets can be manually downloaded in an appropriate format, often as CSV files. Scripts for processing data can also be shared through the site. In order to build a community around the platform, a number of forums are offered covering energy system data and related topics.
Most of the data on OpenEI is exposed as linked open data (LOD) (described elsewhere on this page). OpenEI also uses LOD methods to populate its definitions throughout the wiki with real-time connections to DBPedia, reegle, and Wikipedia.: 46–49
OpenEI has been used to classify geothermal resources in the United States, and to publicize municipal utility rates, again within the US.
=== OpenGridMap ===
OpenGridMap employs crowdsourcing techniques to gather detailed data on electricity network components and then infer a realistic network structure using methods from statistics and graph theory. The scope of the project is worldwide and both distribution and transmission networks can be reverse engineered. The project is managed by the Chair of Business Information Systems, TUM Department of Informatics, Technical University of Munich, Munich, Germany. The project maintains a website and a Facebook page and provides an Android mobile app to help the public document electrical devices, such as transformers and substations. The bulk of the data is being made available under a Creative Commons CC BY 3.0 IGO license. The processing software is written primarily in Python and MATLAB and is hosted on GitHub.
OpenGridMap provides a tailored GIS web application, layered on OpenStreetMap, which contributors can use to upload and edit information directly. The same database automatically stores field recordings submitted by the mobile app. Subsequent classification by experts allows normal citizens to document and photograph electrical components and have them correctly identified. The project is experimenting with the use of hobby drones to obtain better information on associated facilities, such as photovoltaic installations. Transmission line data is also sourced from and shared with OpenStreetMap. Each component record is verified by a moderator.
Once sufficient data is available, the transnet software is run to produce a likely network, using statistical correlation, Voronoi partitioning, and minimum spanning tree (MST) algorithms. The resulting network can be exported in CSV (separate files for nodes and lines), XML, and CIM formats. CIM models are well suited for translation into software-specific data formats for further analysis, including power grid simulation. Transnet also displays descriptive statistics about the resulting network for visual confirmation.: 3–5
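The MST stage can be sketched in plain Python; the substation coordinates below are invented, and transnet's statistical-correlation and Voronoi stages are omitted:

```python
import itertools
import math

# Invented substation coordinates; transnet would connect them into a
# likely network using, among other steps, a minimum spanning tree.
nodes = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (1.0, 1.0), "D": (5.0, 5.0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Kruskal's algorithm with a simple union-find over node names.
parent = {n: n for n in nodes}

def find(n):
    while parent[n] != n:
        parent[n] = parent[parent[n]]   # path compression
        n = parent[n]
    return n

edges = sorted(itertools.combinations(nodes, 2),
               key=lambda e: dist(nodes[e[0]], nodes[e[1]]))

mst = []
for u, v in edges:
    ru, rv = find(u), find(v)
    if ru != rv:           # edge joins two components: no cycle created
        parent[ru] = rv
        mst.append((u, v))

print(mst)   # a spanning tree has |V| - 1 = 3 edges
```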
The project is motivated by the need to provide datasets for high-resolution energy system models, so that energy system transitions (like the German Energiewende) can be better managed, both technically and policy-wise. The rapid expansion of renewable generation and the anticipated uptake of electric vehicles means that electricity system models must increasingly represent distribution and transmission networks in some detail.
As of 2017, OpenGridMap techniques have been used to estimate the low voltage network in the German city of Garching and to estimate the high voltage grids in several other countries.
=== Power Explorer ===
The Power Explorer portal is a part of the larger Resource Watch platform, hosted by the World Resources Institute. The initial Global Power Plant Database, an open-source database of power plants worldwide, was released in April 2018. As of May 2021, the portal itself is still under development.
Power Explorer is also supported by Google with various research partners, including KTH, Global Energy Observatory, Enipedia, and OPSD.
=== PowerGenome ===
The PowerGenome project aims to provide a coherent dataset covering the United States electricity system. PowerGenome was initially designed to service the GenX model, but support for other modeling frameworks is planned. The PowerGenome utility also pulls from upstream datasets hosted by the Public Utility Data Liberation project (PUDL) and the EIA, so those dependencies need to be met by users. Datasets are occasionally archived on Zenodo. A video describing the project is available.
=== reegle ===
reegle is a clean energy information portal covering renewable energy, energy efficiency, and climate compatible development topics. reegle was launched in 2006 by REEEP and REN21 with funding from the Dutch (VROM), German (BMU), and UK (Defra) environment ministries. Originally released as a specialized internet search engine, reegle was relaunched in 2011 as an information portal.
reegle offers and utilizes linked open data (LOD) (described elsewhere on this page). Sources of data include UN and World Bank databases, as well as dedicated partners around the world. reegle maintains a comprehensive structured glossary (driven by an LOD-compliant thesaurus) of energy and climate compatible development terms to assist with the tagging of datasets. The glossary also facilitates intelligent web searches.
reegle offers country profiles which collate and display energy data on a per-country basis for most of the world. These profiles are kept current automatically using LOD techniques. As of 2021, the portal is no longer active.
=== Renewables.ninja ===
Renewables.ninja is a website that can calculate the hourly power output from solar photovoltaic installations and wind farms located anywhere in the world. The website is a joint project between the Department of Environmental Systems Science, ETH Zurich, Zürich, Switzerland and the Centre for Environmental Policy, Imperial College London, London, United Kingdom. The website went live during September 2016. The resulting time series are provided under a Creative Commons CC BY-NC 4.0 license (which is unfortunately not open data conformant) and the underlying power plant models are published using a BSD-new license. As of February 2017, only the solar model, written in Python, has been released.
The project relies on weather data derived from meteorological reanalysis models and weather satellite images. More specifically, it uses the 2016 MERRA-2 reanalysis dataset from NASA and satellite images from CM-SAF SARAH. For locations in Europe, this weather data is further "corrected" by country so that it better fits with the output from known PV installations and windfarms. Two 2016 papers describe the methods used in detail in relation to Europe. The first covers the calculation of PV power. And the second covers the calculation of wind power.
The website displays an interactive world map to aid the selection of a site. Users can then choose a plant type and enter some technical characteristics. As of February 2017, only year 2014 data can be served, due to technical restrictions. The results are automatically plotted and are available for download in hourly CSV format with or without the associated weather information. The site offers an API for programmatic dataset recovery using token-based authorization. Examples deploying cURL and Python are provided.
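A hedged sketch of such programmatic access is shown below; the endpoint, parameter names, and token shown here are assumptions for illustration only, and the site's API documentation is authoritative.

```python
from urllib.parse import urlencode

API_BASE = "https://www.renewables.ninja/api/data/pv"  # assumed endpoint
TOKEN = "your-api-token"  # obtained from a user account on the site

def build_request(lat, lon, year=2014):
    """Assemble the URL and headers for an hourly PV time-series query.
    Parameter names here are illustrative, not verified against the API."""
    params = {
        "lat": lat, "lon": lon,
        "date_from": f"{year}-01-01", "date_to": f"{year}-12-31",
        "capacity": 1.0, "format": "csv",
    }
    headers = {"Authorization": f"Token {TOKEN}"}  # token-based authorization
    return f"{API_BASE}?{urlencode(params)}", headers

url, headers = build_request(51.5, -0.1)
# The request itself could then be issued with urllib.request or cURL,
# passing the Authorization header.
```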
A number of studies have been undertaken using the power production datasets underpinning the website (these studies predate the launch of the website), with the bulk focusing on energy options for Great Britain.
=== SMARD ===
The SMARD site (pronounced "smart") serves electricity market data from Germany, Austria, and Luxembourg and also provides visual information. The electricity market plots and their underlying time series are released under a permissive CC BY 4.0 license. The site itself was launched on 3 July 2017 in German and an English translation followed shortly. The data portal is mandated under the German Energy Industry Act (Energiewirtschaftsgesetz or EnWG) section §111d, introduced as an amendment on 13 October 2016. Four table formats are offered: CSV, XLS, XML, and PDF. The maximum sampling resolution is 15 min. Market data visuals or plots can be downloaded in PDF, SVG, PNG, and JPG formats. Representative output is shown in the thumbnail (on the left), in this case mid-winter dispatch over two days for the whole of Germany. The horizontal ordering by generation type is first split into renewable and conventional generation and then based on merit. A user guide is updated as required.
== See also ==
Comprehensive Knowledge Archive Network (CKAN) – a web-based open data management system
Climate change mitigation scenarios
Crowdsourcing
Energy modeling – the process of building computer models of energy systems
Energy system – the interpretation of the energy sector in system terms
Open Energy Modelling Initiative – a European-based energy modeling community
Open energy system models – a review of energy system models that are also open source
Open Knowledge Foundation – a global non-profit network that promotes and shares information
== Notes ==
== References ==
== Further information ==
Open energy data wiki maintained by the Open Energy Modelling Initiative
De Felice, Matteo (2020). "Freely available datasets of energy variables". openmod forum. Open Energy Modelling Initiative. Retrieved 1 December 2020. The list is under a Creative Commons CC‑BY‑4.0 license and many of the datasets cited are similarly licensed.
== External links ==
De-risking Energy Efficiency Platform (DEEP) – an open energy efficiency data platform for Europe
European Climatic Energy Mixes project (ECEM) – the role that climate change may play on future energy systems
OpenEnergy Database (oedb) – an open energy system database being developed in Germany
OpenEnergyMonitor – an open source energy use monitoring project
Domain-wide data projects – a list of data related projects designed to support open energy system modeling
Jay Wright Forrester (July 14, 1918 – November 16, 2016) was an American computer engineer, management theorist and systems scientist. He spent his entire career at Massachusetts Institute of Technology, entering as a graduate student in 1939, and eventually retiring in 1989.
During World War II Forrester worked on servomechanisms as a research assistant to Gordon S. Brown. After the war he headed MIT's Whirlwind digital computer project. There he is credited as a co-inventor of magnetic core memory, the predominant form of random-access computer memory during the most explosive years of digital computer development (between 1955 and 1975). It was part of a family of related technologies which bridged the gap between vacuum tubes and semiconductors by exploiting the magnetic properties of materials to perform switching and amplification. His team is also believed to have created the first animation in the history of computer graphics, a "jumping ball" on an oscilloscope.
Later, Forrester was a professor at the MIT Sloan School of Management, where he introduced the Forrester effect describing fluctuations in supply chains. He has been credited as a founder of system dynamics, which deals with the simulation of interactions between objects in dynamic systems. After his initial efforts in industrial simulation, Forrester attempted to simulate urban dynamics and then world dynamics, developing a model with the Club of Rome along the lines of the model popularized in The Limits to Growth. Today system dynamics is most often applied to research and consulting in organizations and other social systems.
== Early life and education ==
Forrester was born on a farm near Anselmo, Nebraska, where "his early interest in electricity was spurred, perhaps, by the fact that the ranch had none. While in high school, he built a wind-driven, 12-volt electrical system using old car parts—it gave the ranch its first electric power."
Forrester received his Bachelor of Science in Electrical Engineering in 1939 from the University of Nebraska–Lincoln. He went on to graduate school at the Massachusetts Institute of Technology, where he worked with servomechanism pioneer Gordon S. Brown and gained his master's in 1945 with a thesis on 'Hydraulic Servomechanism Developments'. In 1949 he was inducted into Eta Kappa Nu, the Electrical & Computer Engineering Honor Society.
== Career ==
=== Whirlwind projects ===
During the late 1940s and early 50s, Forrester continued research in electrical and computer engineering at MIT, heading the Whirlwind project. Trying to design an aircraft simulator, the group moved away from an initial analog design to develop a digital computer. As a key part of this design, Forrester perfected and patented multi-dimensional addressable magnetic-core memory, the forerunner of today's RAM. In 1948–49 the Whirlwind team created the first animation in the history of computer graphics, a "jumping ball" on an oscilloscope. Whirlwind began operation in 1951, the first digital computer to operate in real time and to use video displays for output. It subsequently evolved into the air defence system Semi-Automatic Ground Environment (SAGE).
=== DEC board member ===
Forrester was invited to join the board of Digital Equipment Corporation by Ken Olsen in 1957, and advised the early company on management science. He left before 1966 due to DEC's shift to a product-line-led organisation.
=== Forrester effect ===
In 1956, Forrester moved to the MIT Sloan School of Management as Germeshausen professor. After his retirement, he continued until 1989 as Professor Emeritus and Senior Lecturer. In 1961 he published his seminal book, Industrial Dynamics, the first work in the field of System Dynamics. The work resulted from analyzing the operations of Sprague Electric in Massachusetts. The study was the first model of supply chains, showing in this case that inventory fluctuations were not due to external factors as thought, but rather to internal corporate dynamics that his continuous modelling approach could detect. The phenomenon, originally called the Forrester effect, is today more frequently described as the "bullwhip effect".
=== System dynamics ===
Forrester was the founder of system dynamics, which deals with the simulation of interactions between objects in dynamic systems. Industrial Dynamics was the first book Forrester wrote using system dynamics to analyze industrial business cycles. Several years later, interactions with former Boston Mayor John F. Collins led Forrester to write Urban Dynamics, which sparked an ongoing debate on the feasibility of modeling broader social problems. The book went on to influence the video game SimCity.
Forrester's 1971 paper 'Counterintuitive Behavior of Social Systems' argued that the use of computerized system models to inform social policy was superior to simple debate, both in generating insight into the root causes of problems and in understanding the likely effects of proposed solutions. He characterized normal debate and discussion as being dominated by inexact mental models:
The mental model is fuzzy. It is incomplete. It is imprecisely stated. Furthermore, within one individual, a mental model changes with time and even during the flow of a single conversation. The human mind assembles a few relationships to fit the context of a discussion. As the subject shifts so does the model. When only a single topic is being discussed, each participant in a conversation employs a different mental model to interpret the subject. Fundamental assumptions differ but are never brought into the open. Goals are different and are left unstated. It is little wonder that compromise takes so long. And it is not surprising that consensus leads to laws and programs that fail in their objectives or produce new difficulties greater than those that have been relieved.
The paper summarized the results of a previous study on the system dynamics governing the economies of urban centers, which showed "how industry, housing, and people interact with each other as a city grows and decays." The study's findings, presented more fully in Forrester's 1969 book Urban Dynamics, suggested that the root cause of depressed economic conditions was a shortage of job opportunities relative to the population level, and that the most popular solutions proposed at the time (e.g. increasing low-income housing availability, or reducing real estate taxes) counter-intuitively would worsen the situation by increasing this relative shortage. The paper further argued that measures to reduce the shortage—such as converting land use from housing to industry, or increasing real estate taxes to spur property redevelopment—would be similarly counter-effective.
=== Club of Rome ===
'Counterintuitive Behavior of Social Systems' also sketched a model of world dynamics that correlated population, food production, industrial development, pollution, availability of natural resources, and quality of life, and attempted future projections of those values under various assumptions. Forrester presented this model more fully in his 1971 book World Dynamics, notable for serving as the initial basis for the World3 model used by Donella and Dennis Meadows in their popular 1972 book The Limits to Growth.
Forrester met Aurelio Peccei, a founder of the Club of Rome in 1970. He later met with the Club of Rome to discuss issues surrounding global sustainability; the book World Dynamics followed. World Dynamics took on modeling the complex interactions of the world economy, population and ecology, which was controversial (see also Donella Meadows and The Limits to Growth). It was the start of the field of global modeling. Forrester continued working in applications of system dynamics and promoting its use in education.
== Awards ==
In 1972, Forrester received the IEEE Medal of Honor, the IEEE's highest award.
In 1982, he received the IEEE Computer Pioneer Award. In 1995, he was made a Fellow of the Computer History Museum "for his perfecting of core memory technology into a practical computer memory device; for fundamental contributions to early computer systems design and development". In 2006, he was inducted into the Operational Research Hall of Fame.
== Publications ==
Forrester wrote several books, including:
Forrester, Jay W. (1961). Industrial Dynamics. M.I.T. Press.
1968. Principles of Systems, 2nd ed. Pegasus Communications.
1969. Urban Dynamics. Pegasus Communications.
1971. World Dynamics. Wright-Allen Press.
1975. Collected Papers of Jay W. Forrester. Pegasus Communications.
His articles and papers include:
1958. 'Industrial Dynamics – A Major Breakthrough for Decision Makers', Harvard Business Review, Vol. 36, No. 4, pp. 37–66.
1968, 'Market Growth as Influenced by Capital Investment', Industrial Management Review, Vol. IX, No. 2, Winter 1968.
1971, 'Counterintuitive Behavior of Social Systems', Theory and Decision, Vol. 2, December 1971, pp. 109–140. Also available online.
1989, 'The Beginning of System Dynamics'. Banquet Talk at the international meeting of the System Dynamics Society, Stuttgart, Germany, July 13, 1989. MIT System Dynamics Group Memo D.
1992, 'System Dynamics and Learner-Centered-Learning in Kindergarten through 12th Grade Education.'
1993, 'System Dynamics and the Lessons of 35 Years', in Kenyon B. Greene (ed.) A Systems-Based Approach to Policymaking, New York: Springer, pp. 199–240.
1996, 'System Dynamics and K–12 Teachers: a lecture at the University of Virginia School of Education'.
1998, 'Designing the Future'. Lecture at Universidad de Sevilla, December 15, 1998.
1999, 'System Dynamics: the Foundation Under Systems Thinking'. Cambridge, MA: Sloan School of Management.
2016, 'Learning through System Dynamics as preparation for the 21st Century', System Dynamics Review, Vol. 32, pp. 187–203.
== See also ==
DYNAMO (programming language)
Roger Sisson
== References ==
== External links ==
Selected papers by Forrester.
Jay Wright Forrester at the Mathematics Genealogy Project
Biography of Jay W. Forrester from the Institute for Operations Research and the Management Sciences
"The many careers of Jay Forrester," MIT Technology Review, June 23, 2015
Jay Wright Forrester Papers, MC 439, box X. Massachusetts Institute of Technology, Institute Archives and Special Collections, Cambridge, Massachusetts.
J. W. Forrester and the History of System Dynamics
Computer simulation is the running of a mathematical model on a computer, the model being designed to represent the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.
Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program.
Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005;
a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.
Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.
== Simulation versus model ==
A model consists of the equations used to capture the behavior of a system. By contrast, computer simulation is the actual running of a program that performs the algorithms which solve those equations, often in an approximate manner. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model (or a simulator)", and then either "run the model" or equivalently "run a simulation".
== History ==
Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.
== Data preparation ==
The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).
Input sources also vary widely:
Sensors and other physical devices connected to the model;
Control surfaces used to direct the progress of the simulation in some way;
Current or historical data entered by hand;
Values extracted as a by-product from other processes;
Values output for the purpose by other simulations, models, or processes.
Lastly, the time at which data is available varies:
"invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
data can be provided during the simulation run, for example by a sensor network.
Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula. There are now many others.
Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, what is much harder is knowing what the accuracy (compared to measurement resolution and precision) of the values is. Often such values are expressed as "error bars", the minimum and maximum deviations from the reported value within which the true value is expected to lie. Because digital computer arithmetic is not exact, rounding and truncation errors compound this error, so it is useful to perform an "error analysis" to confirm that values output by the simulation will still be usefully accurate.
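The rounding-error point can be demonstrated directly; for example, 0.1 has no exact binary floating-point representation, so repeatedly adding it accumulates error that a naive equality test misses:

```python
import math

# Naive accumulation: each addition of the inexact binary value of 0.1
# introduces a small rounding error, and the errors accumulate.
total = 0.0
for _ in range(1000):
    total += 0.1

exact = 100.0
error = abs(total - exact)   # small, but nonzero

# math.fsum tracks partial sums exactly, avoiding the accumulation.
compensated = math.fsum(0.1 for _ in range(1000))
```

This is why a simulation's error analysis should treat floating-point arithmetic itself as one of the error sources, alongside measurement accuracy of the input data.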
== Types ==
Models used for computer simulations can be classified according to several independent pairs of attributes, including:
Stochastic or deterministic (and as a special case of deterministic, chaotic) – see external links below for examples of stochastic vs. deterministic simulations
Steady-state or dynamic
Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAEs) or dynamics simulation of field problems, e.g. CFD or FEM simulations (described by PDEs).
Local or distributed.
Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes:
Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category.
If the underlying graph is not a regular grid, the model may belong to the meshfree method class.
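A minimal stencil-code sketch (a generic 1-D heat-diffusion example, not drawn from any particular CFD package) shows the regular-grid, next-neighbor access pattern:

```python
def diffuse(u, alpha=0.1, steps=100):
    """Explicit finite-difference update: each interior cell reads only
    itself and its two nearest neighbors (a 3-point stencil) on a
    regular 1-D grid. Boundary values are held fixed (Dirichlet)."""
    u = list(u)
    for _ in range(steps):
        u = [u[0]] + [
            u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

# Heat spreads symmetrically outward from an initial spike in the middle.
result = diffuse([0.0, 0.0, 1.0, 0.0, 0.0])
```

The same pattern generalizes to 2-D and 3-D grids, where the regular structure makes stencil codes easy to parallelize.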
For steady-state simulations, equations define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.
Dynamic simulations attempt to capture changes in a system in response to (usually changing) input signals.
Stochastic models use random number generators to model chance or random events;
A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events.
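The event-queue mechanism described above can be sketched with a priority queue; the breakdown/repair events here are a hypothetical example, not tied to any particular simulation package:

```python
import heapq

def run_des(events, horizon=100.0):
    """Minimal discrete event loop: repeatedly pop the earliest event,
    process it, and possibly schedule follow-up events."""
    queue = list(events)            # (simulated_time, event_name) tuples
    heapq.heapify(queue)
    log = []
    while queue:
        time, name = heapq.heappop(queue)
        if time > horizon:          # stop once events pass the horizon
            break
        log.append((time, name))
        if name == "breakdown":     # a breakdown schedules its own repair
            heapq.heappush(queue, (time + 5.0, "repair"))
    return log

# Events are processed in simulated-time order, not insertion order.
log = run_des([(0.0, "start"), (12.0, "breakdown"), (3.0, "inspection")])
```

Note that simulated time jumps from event to event; no wall-clock time passes between them, which is why a DES need not run in real time.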
A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.
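In code, a continuous simulation of this kind amounts to repeatedly advancing the state by solving the equations over small time steps; a generic forward-Euler sketch for exponential decay dy/dt = -k*y (not any specific simulator) looks like:

```python
def euler_decay(y0, k, dt, steps):
    """Forward-Euler integration of dy/dt = -k*y: at each step the
    state is advanced by dt times the current derivative."""
    y = y0
    trajectory = [y]
    for _ in range(steps):
        y += dt * (-k * y)      # one explicit time step
        trajectory.append(y)
    return trajectory

# 100 steps of dt=0.01 integrate to t=1; the exact answer is exp(-1).
traj = euler_decay(y0=1.0, k=1.0, dt=0.01, steps=100)
```

Production simulators use higher-order or implicit schemes for accuracy and stability, but the step-advance-step loop is the same.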
A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next.
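A minimal agent-based sketch (the energy rule below is invented purely for illustration) shows agents carrying individual state and a local update rule, rather than being represented by an aggregate density:

```python
class Agent:
    """An entity with internal state and a local behavioral rule."""
    def __init__(self, energy):
        self.energy = energy
        self.alive = True

    def step(self):
        # Rule: consume one unit of energy per time step; die at zero.
        if self.alive:
            self.energy -= 1
            if self.energy <= 0:
                self.alive = False

def simulate(agents, steps):
    for _ in range(steps):
        for agent in agents:    # each agent is updated individually
            agent.step()
    return sum(a.alive for a in agents)

# Agents differ in internal state, so individual outcomes differ.
survivors = simulate([Agent(3), Agent(5), Agent(10)], steps=4)
```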
Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
== Visualization ==
Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.
Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.
Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.
== In science ==
Generic examples of types of computer simulations in science, which are derived from an underlying mathematical description:
a numerical simulation of differential equations that cannot be solved analytically, theories that involve continuous systems such as phenomena in physical cosmology, fluid dynamics (e.g., climate models, roadway noise models, roadway air dispersion models), continuum mechanics and chemical kinetics fall into this category.
a stochastic simulation, typically used for discrete systems where events occur probabilistically and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift, biochemical or gene regulatory networks with small numbers of molecules. (see also: Monte Carlo method).
multiparticle simulation of the response of nanomaterials at multiple scales to an applied force for the purpose of modeling their thermoelastic and thermodynamic properties. Techniques used for such simulations are Molecular dynamics, Molecular mechanics, Monte Carlo method, and Multiscale Green's function.
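The stochastic category above is often introduced with the textbook Monte Carlo estimate of π: sample random points in the unit square and count the fraction that fall inside the quarter circle.

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi from n random points: the probability that a uniform
    point in the unit square lies inside the quarter circle is pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n

estimate = monte_carlo_pi(100_000)
```

The estimate converges slowly (error shrinks as 1/√n), which is characteristic of Monte Carlo methods generally.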
Specific examples of computer simulations include:
statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
agent based simulation has been used effectively in ecology, where it is often called "individual based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
time stepped dynamic models. In hydrology there are several such hydrology transport models, for example the SWMM and DSSAM models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
computer simulation using molecular modeling for drug discovery.
computer simulation to model viral infection in mammalian cells.
computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.
Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.
Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.
In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative and quantitative methods, reviews of the literature (including scholarly), and interviews with experts, and which forms an extension of data triangulation. As with any other scientific method, replication is an important part of computational modeling.
== In practical contexts ==
Computer simulations are used in a wide variety of practical contexts, such as:
analysis of air pollutant dispersion using atmospheric dispersion modeling
as a possible humane alternative to live animal testing with respect to animal rights
design of complex systems such as aircraft and also logistics systems.
design of noise barriers to effect roadway noise mitigation
modeling of application performance
flight simulators to train pilots
weather forecasting
forecasting of risk
simulation of electrical circuits
Power system simulation
simulation of other computers (emulation)
forecasting of prices on financial markets (for example Adaptive Modeler)
behavior of structures (such as buildings and industrial parts) under stress and other conditions
design of industrial processes, such as chemical processing plants
strategic management and organizational studies
reservoir simulation in petroleum engineering to model subsurface reservoirs
process engineering simulation tools.
robot simulators for the design of robots and robot control algorithms
urban simulation models that simulate dynamic patterns of urban development and responses to urban land use and transportation policies.
traffic engineering to plan or redesign parts of the street network, from single junctions through city networks to national highway networks, and for transportation system planning, design and operations. See a more detailed article on Simulation in Transportation.
modeling car crashes to test safety mechanisms in new vehicle models.
crop-soil systems in agriculture, via dedicated software frameworks (e.g. BioMA, OMS3, APSIM)
The reliability and the trust people put in computer simulations depend on the validity of the simulation model; verification and validation are therefore of crucial importance in the development of computer simulations. Another important aspect of computer simulations is reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, it is a special point of attention in stochastic simulations, where the random numbers should in fact be pseudo-random numbers generated from a fixed seed. An exception to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
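The role of a fixed seed in making a stochastic simulation reproducible can be sketched as follows. This is a minimal illustration (a toy one-dimensional random walk), not a model of any particular system; the function name and parameters are hypothetical.

```python
import random

def run_stochastic_simulation(seed, steps=1000):
    """Toy stochastic simulation: a one-dimensional random walk.

    Seeding the pseudo-random number generator means every execution
    with the same seed reproduces exactly the same trajectory.
    """
    rng = random.Random(seed)  # independent, explicitly seeded generator
    position = 0
    for _ in range(steps):
        position += rng.choice((-1, 1))  # random step left or right
    return position

# Two runs with the same seed yield identical results:
assert run_stochastic_simulation(42) == run_stochastic_simulation(42)
```

Using a dedicated `random.Random(seed)` instance, rather than the shared module-level generator, also keeps the simulation's random stream isolated from other code.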
Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.
Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.
In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.
== Pitfalls ==
Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
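A Monte Carlo combination of input distributions of the kind described above can be sketched briefly. The distributions, units, and function names below are hypothetical illustrations, not data from any real exploration program; the point is that when one input (the net ratio) is known to only one significant figure, the result should be reported just as coarsely.

```python
import random
import statistics

def simulate_recoverable_oil(n_samples=100_000, seed=1):
    """Monte Carlo estimate of recoverable oil volume (arbitrary units).

    Combines samples from two assumed input distributions:
    a gross rock volume and a net oil-bearing ratio. The net ratio
    is known only to one significant figure (~0.3), so reporting the
    mean to four figures would misleadingly overstate its precision.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        gross_volume = rng.gauss(100.0, 15.0)  # assumed normal distribution
        net_ratio = rng.uniform(0.25, 0.35)    # known to ~1 significant figure
        samples.append(gross_volume * net_ratio)
    return statistics.mean(samples)

estimate = simulate_recoverable_oil()
print(f"mean recoverable volume ≈ {estimate:.0f}")  # report coarsely
```

A sensitivity analysis would repeat the run while widening or shifting each input distribution in turn, to see which input dominates the spread of the output.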
== See also ==
== References ==
== Further reading ==
Young, Joseph and Findley, Michael. 2014. "Computational Modeling to Study Conflicts and Terrorism." Routledge Handbook of Research Methods in Military Studies edited by Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. pp. 249–260. New York: Routledge.
R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy.
E. Winsberg Simulation in Science. Entry in the Stanford Encyclopedia of Philosophy.
S. Hartmann, The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77–100.
E. Winsberg, Science in the Age of Computer Simulation. Chicago: University of Chicago Press, 2010.
P. Humphreys, Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press, 2004.
James J. Nutaro (2011). Building Software for Simulation: Theory and Algorithms, with Applications in C++. John Wiley & Sons. ISBN 978-1-118-09945-2.
Desa, W. L. H. M., Kamaruddin, S., & Nawawi, M. K. M. (2012). Modeling of Aircraft Composite Parts Using Simulation. Advanced Material Research, 591–593, 557–560.
== External links ==
Guide to the Computer Simulation Oral History Archive 2003-2018 | Wikipedia/Computer_modeling |
Energy planning has a number of different meanings, but the most common meaning of the term is the process of developing long-range policies to help guide the future of a local, national, regional or even the global energy system. Energy planning is often conducted within governmental organizations but may also be carried out by large energy companies such as electric utilities or oil and gas producers, whose operations are significant sources of greenhouse gas emissions. Energy planning may be carried out with input from different stakeholders drawn from government agencies, local utilities, academia and other interest groups.
Since 1973, energy modeling, on which energy planning is based, has developed significantly. Energy models can be classified into three groups: descriptive, normative, and futuristic forecasting.
Energy planning is often conducted using integrated approaches that consider both the provision of energy supplies and the role of energy efficiency in reducing demands (Integrated Resource Planning). Energy planning should always reflect the outcomes of population growth and economic development. There are also several alternative energy solutions which avoid the release of greenhouse gases, such as electrifying current machines and using nuclear energy. A new energy plan for cities is created through a careful investigation of the planning process, which integrates city planning and energy planning and provides energy solutions for high-level cities and industrial parks.
== Planning and market concepts ==
Energy planning has traditionally played a strong role in setting the framework for regulations in the energy sector (for example, influencing what type of power plants might be built or what prices were charged for fuels). But in the past two decades many countries have deregulated their energy systems so that the role of energy planning has been reduced, and decisions have increasingly been left to the market. This has arguably led to increased competition in the energy sector, although there is little evidence that this has translated into lower energy prices for consumers. Indeed, in some cases, deregulation has led to significant concentrations of "market power" with large very profitable companies having a large influence as price setters.
== Integrated resource planning ==
Approaches to energy planning depend on the planning agent and the scope of the exercise. Several catch-phrases are associated with energy planning. Basic to all is resource planning, i.e. a view of the possible sources of energy in the future. A forking in methods is whether the planner considers the possibility of influencing the consumption (demand) for energy. The 1970s energy crisis ended a period of relatively stable energy prices and a stable supply-demand relation. Concepts of demand side management, least cost planning and integrated resource planning (IRP) emerged with new emphasis on the need to reduce energy demand through new technologies or simple energy saving.
== Sustainable energy planning ==
Further global integration of energy supply systems, together with local and global environmental limits, amplifies the scope of planning in both subject and time perspective. Sustainable energy planning should consider the long-term environmental impacts of energy consumption and production, particularly in light of the threat of global climate change, which is caused largely by emissions of greenhouse gases from the world's energy systems.
The 2022 renewable energy industry outlook indicates that supportive policies from an administration focused on combating climate change are expected to aid the growth of the renewable energy industry. President Biden has argued in favor of developing the clean energy industry in the US and worldwide to vigorously address climate change, and has expressed his intention to move away from the oil industry. The administration's "Plan for Climate Change and Environmental Justice" aims to reach 100% carbon-free power generation by 2035 and net-zero emissions by 2050 in the USA.
Many OECD countries and some U.S. states are now moving to more closely regulate their energy systems. For example, many countries and states have been adopting targets for emissions of CO2 and other greenhouse gases. In light of these developments, broad scope integrated energy planning could become increasingly important.
Sustainable Energy Planning takes a more holistic approach to the problem of planning for future energy needs. It follows a structured decision-making process with six key steps, namely:
Exploration of the context of the current and future situation
Formulation of particular problems and opportunities which need to be addressed as part of the Sustainable Energy Planning process. This could include such issues as "peak oil" or "economic recession/depression", as well as the development of energy demand technologies.
Create a range of models to predict the likely impact of different scenarios. This traditionally would consist of mathematical modelling but is evolving to include "Soft System Methodologies" such as focus groups, peer ethnographic research, "what if" logical scenarios etc.
Based on the output from a wide range of modelling exercises and literature reviews, open forum discussion etc., the results are analysed and structured in an easily interpreted format.
The results are then interpreted to determine the scope, scale and likely implementation methodologies which would be required to ensure successful implementation.
This stage is a quality assurance process which actively interrogates each stage of the Sustainable Energy Planning process and checks if it has been carried out rigorously, without any bias and that it furthers the aims of sustainable development and does not act against them.
The last stage of the process is to take action. This may consist of the development, publication and implementation of a range of policies, regulations, procedures or tasks which together will help to achieve the goals of the Sustainable Energy Plan.
Designing for implementation is often carried out using "Logical Framework Analysis" which interrogates a proposed project and checks that it is completely logical, that it has no fatal errors and that appropriate contingency arrangements have been put in place to ensure that the complete project will not fail if a particular strand of the project fails.
Sustainable energy planning is particularly appropriate for communities who want to develop their own energy security, while employing best available practice in their planning processes.
== Energy planning tools (software) ==
Energy planning can be conducted on different software platforms, over various timespans, and with different qualities of resolution (i.e., very short divisions of time/space or very large divisions). There are multiple platforms available for all sorts of energy planning analysis, with focuses on different areas, and significant growth in modeling software and platforms in recent years. Energy planning tools can be identified as commercial, open source, educational, free, and as used by governments (often custom tools).
== Potential energy solutions ==
=== Electrification ===
One potential energy option is to electrify all machines that currently use fossil fuels as their energy source. Electric alternatives are already available, such as electric cars, electric cooktops, and electric heat pumps; these products now need to be widely implemented to electrify and decarbonize energy use. Reducing dependence on fossil fuels through electrification ultimately requires that electricity be generated by renewable sources. As of 2020, 60.3% of all electricity generated in the United States came from fossil fuels, 19.7% from nuclear energy, and 19.8% from renewables. The United States thus still relies heavily on fossil fuels as a source of energy. For the electrification of machines to help decarbonization efforts, more renewable energy sources, such as wind and solar, would have to be built.
Another potential problem with the use of renewable energy is energy transmission. A study conducted by Princeton University found that the locations with the highest renewable potential are in the Midwest, whereas the places with the highest energy demand are coastal cities. To make effective use of the electricity from these renewable sources, the U.S. electric grid would have to be nationalized and more high-voltage transmission lines built. The total amount of electricity that the grid can accommodate would also have to increase: if more electric cars were driven, declining gasoline demand would be matched by rising demand for electricity, requiring the grid to transport more energy at any given moment than is currently viable.
=== Nuclear Energy ===
Nuclear energy is sometimes considered to be a clean energy source. Nuclear energy's only associated carbon emissions take place during the mining of uranium; the process of obtaining energy from uranium does not itself emit carbon. A primary concern in using nuclear energy is what to do with radioactive waste. The highest-level radioactive waste is spent reactor fuel, whose radioactivity decreases over time through radioactive decay. The time it takes for radioactive waste to decay depends on the length of the substance's half-life. Currently, the United States does not have a permanent disposal facility for high-level nuclear waste.
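The half-life relationship mentioned above follows the standard exponential decay law, which can be sketched as a short calculation. The isotope and numbers below are hypothetical, chosen only to illustrate the formula.

```python
def remaining_activity(initial, elapsed_years, half_life_years):
    """Amount of radioactive material remaining after a given time.

    Exponential decay: N(t) = N0 * 0.5 ** (t / half_life),
    i.e. the quantity halves once per half-life.
    """
    return initial * 0.5 ** (elapsed_years / half_life_years)

# Hypothetical isotope with a 30-year half-life:
# after two half-lives (60 years), a quarter of the material remains.
print(remaining_activity(1000.0, 60, 30))  # → 250.0
```

For long-lived waste the same formula shows why storage horizons are so long: an isotope with a 24,000-year half-life still retains half its activity after 24 millennia.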
Public support for increasing nuclear energy production is an important consideration when planning for sustainable energy. Nuclear energy production has a complicated past: accidents and meltdowns at several nuclear power plants have tainted the reputation of nuclear energy for many. A considerable section of the public is concerned about the health and environmental impacts of a nuclear power plant melting down, believing that the risk is not worth the reward. However, a portion of the population believes that expanding nuclear energy is necessary and that the threats of climate change far outweigh the possibility of a meltdown, especially considering the technological advancements made in recent decades.
== Global greenhouse gas emissions and energy production ==
The majority of global manmade greenhouse gas emissions derives from the energy sector, which contributes 72.0% of global emissions. The majority of that energy goes toward producing electricity and heat (31.0%), followed by transportation (15%), manufacturing (12%), agriculture (11%), and forestry (6%). Several molecular compounds fall under the classification of greenhouse gases, including carbon dioxide, methane, and nitrous oxide. Carbon dioxide is the most emitted greenhouse gas, making up 76% of global emissions. Methane is the second largest at 16%, emitted primarily by the agriculture industry. Nitrous oxide makes up 6% of globally emitted greenhouse gases, with agriculture and industry as its largest emitters.
The challenges in the energy sector include the reliance on coal. Coal production remains key to the energy mix, and many regions rely on coal imports to meet growing demand. Energy planning evaluates the current energy situation and estimates future changes based on industrialization patterns and resource availability. Many future changes and solutions depend on a global effort to move away from coal, develop energy-efficient technology, and continue to electrify the world.
== See also ==
Capacity factor – Electrical production measure
Wind power forecasting – Estimate of the expected production of one or more wind turbines
Wind energy software – Type of specialized software
Variable renewable energy#Intermittent energy source – Class of renewable energy sources
Wind resource assessment – process by which wind power developers estimate the future energy production of a wind farm
Virtual power plant – Cloud-based distributed power plant
Electricity#Generation and transmission – Phenomena related to electric charge
Transmission system operator – Energy transporter
Base load – Minimum level of demand on an electrical grid over a span of time
Merit order – Ranking of available sources of energy
Load factor (electrical) – The average power divided by the peak power over a period of time
Load following power plant – Power plant that adjusts output based on demand
Peak demand – Highest power demand on a grid in a specified period
== References ==
== External links ==
An online community for energy planners working on energy for sustainable development. Archived May 11, 2020, at the Wayback Machine
A masters education on Energy planning at Aalborg University in Denmark. | Wikipedia/Energy_planning |
The Open Energy Modelling Initiative (openmod) is a grassroots community of energy system modellers from universities and research institutes across Europe and elsewhere. The initiative promotes the use of open-source software and open data in energy system modelling for research and policy advice. The Open Energy Modelling Initiative documents a variety of open-source energy models and addresses practical and conceptual issues regarding their development and application. The initiative runs an email list, an internet forum, and a wiki and hosts occasional academic workshops. A statement of aims is available.
== Context ==
The application of open-source development to energy modelling dates back to around 2003. This section provides some background for the growing interest in open methods.
=== Growth in open energy modelling ===
Just two active open energy modelling projects were cited in a 2011 paper: OSeMOSYS and TEMOA.: 5861 Balmorel was also public at that time, having been made available on a website in 2001.
As of November 2016, the openmod wiki lists 24 such undertakings.
As of October 2021, the Open Energy Platform lists 17 open energy frameworks and about 50 open energy models.
=== Academic literature ===
A 2012 paper presents the case for using "open, publicly accessible software and data as well as crowdsourcing techniques to develop robust energy analysis tools".: 149 The paper claims that these techniques can produce high-quality results and are particularly relevant for developing countries.
There is an increasing call for the energy models and datasets used for energy policy analysis and advice to be made public in the interests of transparency and quality. A 2010 paper concerning energy efficiency modeling argues that "an open peer review process can greatly support model verification and validation, which are essential for model development".: 17 One 2012 study argues that the source code and datasets used in such models should be placed under publicly accessible version control to enable third-parties to run and check specific models. Another 2014 study argues that the public trust needed to underpin a rapid transition in energy systems can only be built through the use of transparent open-source energy models. The UK TIMES project (UKTM) is open source, according to a 2014 presentation, because "energy modelling must be replicable and verifiable to be considered part of the scientific process" and because this fits with the "drive towards clarity and quality assurance in the provision of policy insights".: 8 In 2016, the Deep Decarbonization Pathways Project (DDPP) is seeking to improve its modelling methodologies, a key motivation being "the intertwined goals of transparency, communicability and policy credibility.": S27 A 2016 paper argues that model-based energy scenario studies, wishing to influence decision-makers in government and industry, must become more comprehensible and more transparent. To these ends, the paper provides a checklist of transparency criteria that should be completed by modelers. 
The authors note however that they "consider open source approaches to be an extreme case of transparency that does not automatically facilitate the comprehensibility of studies for policy advice.": 4 An editorial from 2016 opines that closed energy models providing public policy support "are inconsistent with the open access movement [and] publically [sic] funded research".: 2 A 2017 paper lists the benefits of open data and models and the reasons that many projects nonetheless remain closed. The paper makes a number of recommendations for projects wishing to transition to a more open approach. The authors also conclude that, in terms of openness, energy research has lagged behind other fields, most notably physics, biotechnology, and medicine. Moreover:
Given the importance of rapid global coordinated action on climate mitigation and the clear benefits of shared research efforts and transparently reproducible policy analysis, openness in energy research should not be for the sake of having some code or data available on a website, but as an initial step towards fundamentally better ways to both conduct our research and engage decision-makers with [our] models and the assumptions embedded within them.: 214
A one-page opinion piece in Nature News from 2017 advances the case for using open energy data and modeling to build public trust in policy analysis. The article also argues that scientific journals have a responsibility to require that data and code be submitted alongside text for scrutiny, currently only Energy Economics makes this practice mandatory within the energy domain.
=== Copyright and open energy data ===
Issues surrounding copyright remain at the forefront with regard to open energy data. Most energy datasets are collated and published by official or semi-official sources, for example, national statistics offices, transmission system operators, and electricity market operators. The doctrine of open data requires that these datasets be available under free licenses (such as CC BY 4.0) or be in the public domain. But most published energy datasets carry proprietary licenses, limiting their reuse in numerical and statistical models, open or otherwise. Measures to enforce market transparency have not helped because the associated information is normally licensed to preclude downstream usage. Recent transparency measures include the 2013 European energy market transparency regulation 543/2013 and a 2016 amendment to the German Energy Industry Act to establish a national energy information platform, slated to launch on 1 July 2017. Energy databases may also be protected under general database law, irrespective of the copyright status of the information they hold.
In December 2017, participants from the Open Energy Modelling Initiative and allied research communities made a written submission to the European Commission on the re-use of public sector information. The document provides a comprehensive account of the data issues faced by researchers engaged in open energy system modeling and energy market analysis and quoted extensively from a German legal opinion.
In May 2020, participants from the Open Energy Modelling Initiative made a further submission on the European strategy for data. In mid‑2021, participants made two written submissions on a proposed Data Act — legislative work-in-progress intended primarily to improve public interest business-to-government (B2G) information transfers within the European Economic Area (EEA). More specifically, the two Data Act submissions drew attention to restrictive but nonetheless compliant public disclosure reporting practices deployed by the European Energy Exchange (EEX).
=== Public policy support ===
In May 2016, the European Union announced that "all scientific articles in Europe must be freely accessible as of 2020". This is a step in the right direction, but the new policy makes no mention of open software and its importance to the scientific process. In August 2016, the United States government announced a new federal source code policy which mandates that at least 20% of custom source code developed by or for any agency of the federal government be released as open-source software (OSS). The US Department of Energy (DOE) is participating in the program. The project is hosted on a dedicated website and subject to a three-year pilot. Open-source campaigners are using the initiative to advocate that European governments adopt similar practices. In 2017 the Free Software Foundation Europe (FSFE) issued a position paper calling for free software and open standards to be central to European science funding, including the flagship EU program Horizon 2020. The position paper focuses on open data and open data processing and the question of open modeling is not traversed per se.
=== Adoption by regulators and industry generally ===
A trend evident by 2023 is the adoption of open modelling tools by regulators within the European Union and North America. Fairley (2023), writing in IEEE Spectrum, provides an overview. As one example, the Canada Energy Regulator is using the PyPSA framework for systems analysis.
== Workshops ==
The Open Energy Modelling Initiative participants take turns to host regular academic workshops.
The Open Energy Modelling Initiative also holds occasional specialist meetings.
== See also ==
Crowdsourcing
Energy modeling
Energy system – the interpretation of the energy sector in system terms
Free Software Foundation Europe – a non-profit organization advocating for free software in Europe
Open data
Open energy system models – a review of energy system models that are also open source
Open energy system databases – database projects which collect, clean, and republish energy-related datasets
== Notes ==
== Further reading ==
Generation R open science blog on the openmod community
Introductory video on open energy system modeling using the python language as an example
Introductory video on the Open Energy Outlook (OEO) project specific to the United States
== External links ==
Related to openmod
Open Energy Modelling Initiative website
Open Energy Modelling Initiative wiki
Open Energy Modelling Initiative discussion forum
Open Energy Modelling Initiative email list archive
Open Energy Modelling Initiative YouTube channel
Open Energy Modelling Initiative GitHub account
Open Energy Modelling Initiative twitter feed
Open Energy Modelling Initiative manifesto written in 2014
Open energy data
Open Energy Platform – a collaborative versioned database for storing open energy system model datasets
Enipedia – a semantic wiki-site and database covering energy systems data worldwide
Energypedia – a wiki-based collaborative knowledge exchange covering sustainable energy topics in developing countries
Open Power System Data project – triggered by the work of the Open Energy Modelling Initiative
OpenEI – a US-based open energy data portal
Similar initiatives
soundsoftware.ac.uk – an open modelling community for acoustic and music software
Other
REEEM – a scientific project modeling sustainable energy futures for Europe
EERAdata – a project exploring FAIR energy data for Europe
== References == | Wikipedia/Open_Energy_Modelling_Initiative |
Prospective Outlook on Long-term Energy Systems (POLES) is a world simulation model for the energy sector that runs on the Vensim software. It is a techno-economic model with endogenous projection of energy prices, a complete accounting of energy demand and supply of numerous energy vectors and associated technologies, and a carbon dioxide and other greenhouse gas emissions module.
== History ==
POLES was initially developed in the early 1990s in the Institute of Energy Policy and Economics IEPE (now EDDEN-CNRS) in Grenoble, France. It was conceived on the basis of research issues related to global energy supply and climate change and the long-term impact of energy policies. It was initially developed through a detailed description of sectoral energy demand, electricity capacity planning and fossil fuel exploration and production in the different world regions. Along its development it incorporated theoretical and practical expertise in many fields such as mathematics, economics, engineering, energy analysis, international trade and technical change.
The initial development of POLES was financed by the JOULE II and III programmes of the European Commission’s Third and Fourth Framework Programmes (FP) for Research and Technological Development (1990-1994 and 1994-1998) as well as by the French CNRS. Since then, the model has been developed extensively through several projects, some partly financed by FP5, FP6 and FP7, and in collaboration between the EDDEN-CNRS, the consulting company Enerdata and the European Joint Research Centre IPTS.
With a history spanning twenty years, it is one of the few energy models worldwide that benefits from a continuous development process and expertise over such an extended time period.
== Structure ==
The model provides a complete system for the simulation and economic analysis of the world’s energy sector up to 2050. POLES is a partial equilibrium model with a yearly recursive simulation process with a combination of price-induced behavioural equations and a cost- and performance-based system for a large number of energy or energy-related technologies. Contrary to several other energy sector models, international energy prices are endogenous. The main exogenous variables are the gross domestic product and population for each country or region.
The model’s structure corresponds to a system of interconnected modules and articulates three levels of analysis: international energy markets, regional energy balances, and national energy demand (which includes new technologies, electricity production, primary energy production systems and sectoral greenhouse gas emissions).
POLES breaks down the world into 66 regions, of which 54 correspond to countries (including the 28 countries of the European Union) and 12 correspond to country aggregates; for each of these regions, a full energy balance is modelled. The model covers 15 energy demand sectors in each region.
=== Demand sectors ===
Each demand sector is described with a high degree of detail, including activity indicators, short- and long-term energy prices and associated elasticities, and technological evolution trends (thus including the dynamic cumulative processes associated with technological learning curves). This allows strong economic consistency in the adjustment of supply and demand by region, as relative price changes at a sectoral level impact all key components of a region's sector. Sectoral value added is simulated.
Energy demand for each fuel in a sector follows a market share-based competition driven by energy prices and factors related to policy or development assumptions.
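POLES’s actual demand equations are not reproduced here; as an illustration only, price-driven inter-fuel competition of this general kind is often implemented as a multinomial logit over fuel prices. In the sketch below, the fuel names, prices and the price-sensitivity parameter are all invented for the example:

```python
import math

def market_shares(prices, sensitivity=4.0):
    """Price-driven market shares via a multinomial logit:
    cheaper fuels capture larger shares, with `sensitivity`
    controlling how strongly price differences matter."""
    weights = {fuel: math.exp(-sensitivity * math.log(p))
               for fuel, p in prices.items()}
    total = sum(weights.values())
    return {fuel: w / total for fuel, w in weights.items()}

# Hypothetical end-user prices (same energy unit) for one sector
shares = market_shares({"gas": 1.0, "electricity": 1.8, "oil": 1.3})
```

The shares always sum to one, and raising `sensitivity` pushes the sector toward the cheapest fuel.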
The model is composed of the following demand sectors:
Residential and Tertiary: two sectors.
Industry:
Energy uses in industry: four sectors, allowing for detailed modelling of energy-intensive industries such as the steel industry, the chemicals industry and the non-metallic minerals industry (cement, glass).
Non-energy uses in industry: two sectors, for the transformation sectors such as plastics production and chemical feedstock production.
Transport: four sectors (air, rail, road and other). Road transport modelling comprises several vehicle types (passenger cars, heavy goods trucks) and allows the study of inter-technology competition through the penetration of alternative vehicles (hybrid, electric or fuel cell vehicles).
International bunkers: two sectors.
Agriculture: one sector.
=== Oil and gas supply ===
There are 88 oil and gas production regions with inter-regional trade; these producing regions supply the international energy markets, which in turn feed the demand of the 66 aforementioned world regions. Fossil fuel supply modelling includes technological improvement in the oil recovery rate, a linkage between new discoveries and cumulative drilling, and a feedback of the reserves/production ratio on the oil price. OPEC and non-OPEC production is differentiated. The model includes non-conventional oil resources such as oil shales and tar sands.
=== Power Generation ===
There are 30 electricity generation technologies, including several that are still marginal or planned, such as thermal production with carbon capture and storage or new nuclear designs. Price-induced diffusion mechanisms such as feed-in tariffs can be included as drivers for projecting the future development of new energy technologies.
The model distinguishes four typical daily load curves in a year, with two-hour steps. The load curves are met by a generation mix given by a merit order that is based on marginal costs of operation, maintenance and annualized capital costs. Expected power demand over the year influences investment decisions for new capacity planning in the next step.
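The merit-order dispatch described above can be illustrated with a minimal sketch; the plant fleet, capacities and marginal costs below are hypothetical and the code is not taken from POLES:

```python
def merit_order_dispatch(plants, load):
    """Dispatch capacity in ascending order of marginal cost until the
    load is met; returns generation per plant (same units as load).
    `plants` is a list of (name, capacity, marginal_cost) tuples."""
    dispatch = {}
    remaining = load
    for name, capacity, _cost in sorted(plants, key=lambda p: p[2]):
        used = min(capacity, remaining)
        dispatch[name] = used
        remaining -= used
        if remaining <= 0:
            break
    return dispatch

# Hypothetical fleet: (name, capacity in GW, marginal cost in EUR/MWh)
fleet = [("nuclear", 10, 10), ("coal", 15, 30), ("gas", 20, 60)]
dispatch = merit_order_dispatch(fleet, load=18)  # nuclear 10, coal 8
```

Cheap baseload is used first; the most expensive plant actually dispatched sets the marginal cost for that load level.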
=== Emissions and carbon price ===
The model includes accounting of greenhouse gas (GHG) emissions and allows visualising GHG flows on sectoral, regional and global levels. POLES covers fuel combustion-related emissions in all demand sectors, thus covering over half of global GHG emissions. The six Kyoto Protocol GHGs are covered (carbon dioxide, methane, nitrous oxide, sulphur hexafluoride, hydrofluorocarbons and perfluorocarbons).
The model can be used to test the sensitivity of the energy sector to a carbon price applied to the price of fossil fuels on a regional level, as envisaged or implemented by cap and trade systems such as the EU’s Emissions Trading Scheme.
=== Databases ===
The model’s databases have been developed by IPTS, EDDEN and Enerdata. Data on technological costs and performances were provided by the TECHPOL database. The data for historical energy demand, consumption and prices are compiled and provided by Enerdata.
== Uses ==
The POLES model can be used to study or test the effect of different energy resources assumptions or energy policies and assess the importance of various driving variables behind energy demand and the penetration rates of certain electricity generation or end-use technologies. POLES does not directly provide the macro-economic impact of mitigation solutions as envisaged by the Stern Review; however, it allows a detailed assessment of the costs associated with the development of low- or zero-carbon technologies.
Linked with GHG emissions profiles, the model can produce marginal abatement cost curves (MACCs) for each region and sector at a desired time; these can be used to quantify the costs related to GHG emissions reduction or as an analysis tool for strategic areas for emissions control policies and emissions trading systems under different market configurations and trading rules.
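As an illustration of how such a curve is assembled (the measures, potentials and unit costs below are invented), a marginal abatement cost curve can be built by sorting abatement options by unit cost and accumulating their potentials:

```python
def macc(measures):
    """Sort abatement measures by unit cost and accumulate their
    abatement potential, yielding the stepwise MACC: each point is
    (cumulative abatement, marginal cost of the last measure used)."""
    curve, cumulative = [], 0.0
    for name, potential, unit_cost in sorted(measures, key=lambda m: m[2]):
        cumulative += potential
        curve.append((cumulative, unit_cost))
    return curve

# Hypothetical measures: (name, MtCO2 abated per year, cost in EUR/tCO2)
points = macc([("efficiency", 50, -10), ("wind", 80, 25), ("CCS", 40, 70)])
```

Reading the curve at a given abatement target gives the marginal cost of the last measure needed to reach it.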
Studies including POLES simulations have been commissioned by international bodies such as several Directorates-General of the European Commission, national energy, environment, industry and transport agencies or private actors in the energy sector.
== Criticism ==
POLES can model changes in sectoral value added and shifts of activity between sectors. However, POLES is not a macroeconomic model, in the sense that it uses gross domestic product as an input and includes no feedback on it from the evolution of the energy system: carbon pricing, falling oil production and its effect on transport and mobility, or growth induced by technological innovation (such as the IT boom of the 1990s). As such, it does not provide the total impact on society of, e.g., climate adaptation or mitigation (it does, however, quantify the total cost to the energy sector, including the investment necessary for the development of low-carbon technologies).
The model does not cover all greenhouse gas emissions, notably those related to agriculture (in part) and to land use, land-use change and forestry. As such, the climate component of the model cannot fully project GHG stocks, concentrations and the associated temperature rises from anthropogenic climate change.
== See also ==
Energy economics
Energy modeling
Energy policy
UNFCCC
== External links ==
Enerdata
LEPII-EPE
JRC IPTS
== References ==
The reciprocating motion of a non-offset piston connected to a rotating crank through a connecting rod (as would be found in internal combustion engines) can be expressed by equations of motion. This article shows how these equations of motion can be derived using calculus as functions of angle (angle domain) and of time (time domain).
== Crankshaft geometry ==
The geometry of the system consisting of the piston, rod and crank is represented as shown in the following diagram:
=== Definitions ===
From the geometry shown in the diagram above, the following variables are defined:
l : rod length (distance between piston pin and crank pin)
r : crank radius (distance between crank center and crank pin, i.e. half stroke)
A : crank angle (from cylinder bore centerline at TDC)
x : piston pin position (distance upward from crank center along cylinder bore centerline)
The following variables are also defined:
v : piston pin velocity (upward from crank center along cylinder bore centerline)
a : piston pin acceleration (upward from crank center along cylinder bore centerline)
ω : crank angular velocity (in the same direction/sense as crank angle A)
=== Angular velocity ===
The frequency (Hz) of the crankshaft's rotation is related to the engine's speed (revolutions per minute) as follows:
{\displaystyle \nu ={\frac {\mathrm {RPM} }{60}}}
So the angular velocity (radians/s) of the crankshaft is:
{\displaystyle \omega =2\pi \cdot \nu =2\pi \cdot {\frac {\mathrm {RPM} }{60}}}
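This conversion is straightforward to express in code; the 3000 RPM figure below is just an example value:

```python
import math

def crank_angular_velocity(rpm):
    """Convert engine speed in revolutions per minute to crankshaft
    angular velocity in radians per second: omega = 2*pi*rpm/60."""
    return 2 * math.pi * rpm / 60

omega = crank_angular_velocity(3000)  # 100*pi, about 314.16 rad/s
```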
=== Triangle relation ===
As shown in the diagram, the crank pin, crank center and piston pin form triangle NOP.
By the cosine law it is seen that:
{\displaystyle l^{2}=r^{2}+x^{2}-2\cdot r\cdot x\cdot \cos A}
where l and r are constant and x varies as A changes.
== Equations with respect to angular position (angle domain) ==
Angle domain equations are expressed as functions of angle.
=== Deriving angle domain equations ===
The angle domain equations of the piston's reciprocating motion are derived from the system's geometry equations as follows.
=== Position (geometry) ===
Position with respect to crank angle (from the triangle relation, completing the square, utilizing the Pythagorean identity, and rearranging):
{\displaystyle {\begin{array}{lcl}l^{2}=r^{2}+x^{2}-2\cdot r\cdot x\cdot \cos A\\l^{2}-r^{2}=(x-r\cdot \cos A)^{2}-r^{2}\cdot \cos ^{2}A\\l^{2}-r^{2}+r^{2}\cdot \cos ^{2}A=(x-r\cdot \cos A)^{2}\\l^{2}-r^{2}\cdot (1-\cos ^{2}A)=(x-r\cdot \cos A)^{2}\\l^{2}-r^{2}\cdot \sin ^{2}A=(x-r\cdot \cos A)^{2}\\x=r\cdot \cos A+{\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}\\\end{array}}}
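The closed-form position equation can be checked numerically; the sketch below uses the 6" rod and 2" crank radius from the example later in this article:

```python
import math

def piston_position(A, l=6.0, r=2.0):
    """Piston pin distance x above the crank center for crank angle A
    (radians), rod length l and crank radius r (same length units):
    x = r*cos(A) + sqrt(l**2 - r**2 * sin(A)**2)."""
    return r * math.cos(A) + math.sqrt(l**2 - r**2 * math.sin(A)**2)

# At top dead center (A = 0) the pin is l + r from the crank center;
# at bottom dead center (A = pi) it is l - r, so the stroke is 2*r.
```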
=== Velocity ===
Velocity with respect to crank angle (take first derivative, using the chain rule):
{\displaystyle {\begin{array}{lcl}x'&=&{\frac {dx}{dA}}\\&=&-r\cdot \sin A+{\frac {({\frac {1}{2}})\cdot (-2)\cdot r^{2}\cdot \sin A\cdot \cos A}{\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}}\\&=&-r\cdot \sin A-{\frac {r^{2}\cdot \sin A\cdot \cos A}{\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}}\\\end{array}}}
=== Acceleration ===
Acceleration with respect to crank angle (take second derivative, using the chain rule and the quotient rule):
{\displaystyle {\begin{array}{lcl}x''&=&{\frac {d^{2}x}{dA^{2}}}\\&=&-r\cdot \cos A-{\frac {r^{2}\cdot \cos ^{2}A}{\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}}-{\frac {-r^{2}\cdot \sin ^{2}A}{\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}}-{\frac {r^{2}\cdot \sin A\cdot \cos A\cdot \left(-{\frac {1}{2}}\right)\cdot (-2)\cdot r^{2}\cdot \sin A\cdot \cos A}{\left({\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}\right)^{3}}}\\&=&-r\cdot \cos A-{\frac {r^{2}\cdot \left(\cos ^{2}A-\sin ^{2}A\right)}{\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}}-{\frac {r^{4}\cdot \sin ^{2}A\cdot \cos ^{2}A}{\left({\sqrt {l^{2}-r^{2}\cdot \sin ^{2}A}}\right)^{3}}}\\\end{array}}}
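Both derivatives can be validated against central finite differences of the position function; the rod and crank dimensions below are the example values used elsewhere in this article:

```python
import math

def x(A, l=6.0, r=2.0):
    """Piston position in the angle domain (see the Position section)."""
    return r * math.cos(A) + math.sqrt(l**2 - r**2 * math.sin(A)**2)

def x_prime(A, l=6.0, r=2.0):
    """First derivative dx/dA from the closed form above."""
    root = math.sqrt(l**2 - r**2 * math.sin(A)**2)
    return -r * math.sin(A) - r**2 * math.sin(A) * math.cos(A) / root

def x_double_prime(A, l=6.0, r=2.0):
    """Second derivative d2x/dA2 from the closed form above."""
    s, c = math.sin(A), math.cos(A)
    root = math.sqrt(l**2 - r**2 * s**2)
    return -r * c - r**2 * (c**2 - s**2) / root - r**4 * s**2 * c**2 / root**3

# Central finite differences of x should agree with the closed forms.
A, h = 1.0, 1e-4
fd1 = (x(A + h) - x(A - h)) / (2 * h)
fd2 = (x(A + h) - 2 * x(A) + x(A - h)) / h**2
```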
=== Non Simple Harmonic Motion ===
The angle domain equations above show that the motion of the piston (connected to rod and crank) is not simple harmonic motion, but is modified by the motion of the rod as it swings with the rotation of the crank. This is in contrast to the Scotch Yoke which directly produces simple harmonic motion.
=== Example graphs ===
Example graphs of the angle domain equations are shown below.
== Equations with respect to time (time domain) ==
Time domain equations are expressed as functions of time.
=== Angular velocity derivatives ===
Angle is related to time by angular velocity ω as follows:
{\displaystyle A=\omega t\,}
If angular velocity ω is constant, then:
{\displaystyle {\frac {dA}{dt}}=\omega }
and:
{\displaystyle {\frac {d^{2}A}{dt^{2}}}=0}
=== Deriving time domain equations ===
The time domain equations of the piston's reciprocating motion are derived from the angle domain equations as follows.
=== Position ===
Position with respect to time is simply:
{\displaystyle x\,}
=== Velocity ===
Velocity with respect to time (using the chain rule):
{\displaystyle {\begin{array}{lcl}v&=&{\frac {dx}{dt}}\\&=&{\frac {dx}{dA}}\cdot {\frac {dA}{dt}}\\&=&{\frac {dx}{dA}}\cdot \ \omega \\&=&x'\cdot \omega \\\end{array}}}
=== Acceleration ===
Acceleration with respect to time (using the chain rule and product rule, and the angular velocity derivatives):
{\displaystyle {\begin{array}{lcl}a&=&{\frac {d^{2}x}{dt^{2}}}\\&=&{\frac {d}{dt}}{\frac {dx}{dt}}\\&=&{\frac {d}{dt}}({\frac {dx}{dA}}\cdot {\frac {dA}{dt}})\\&=&{\frac {d}{dt}}({\frac {dx}{dA}})\cdot {\frac {dA}{dt}}+{\frac {dx}{dA}}\cdot {\frac {d}{dt}}({\frac {dA}{dt}})\\&=&{\frac {d}{dA}}({\frac {dx}{dA}})\cdot ({\frac {dA}{dt}})^{2}+{\frac {dx}{dA}}\cdot {\frac {d^{2}A}{dt^{2}}}\\&=&{\frac {d^{2}x}{dA^{2}}}\cdot ({\frac {dA}{dt}})^{2}+{\frac {dx}{dA}}\cdot {\frac {d^{2}A}{dt^{2}}}\\&=&{\frac {d^{2}x}{dA^{2}}}\cdot \omega ^{2}+{\frac {dx}{dA}}\cdot 0\\&=&x''\cdot \omega ^{2}\\\end{array}}}
=== Scaling for angular velocity ===
From the foregoing, it can be seen that the time domain equations are simply scaled forms of the angle domain equations: x is unscaled, x′ is scaled by ω, and x″ is scaled by ω².
To convert the angle domain equations to the time domain, first replace A with ωt, and then scale for angular velocity as follows: multiply x′ by ω, and multiply x″ by ω².
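At constant engine speed, the substitution and scaling rules above give piston velocity and acceleration directly; the sketch below combines them (the dimensions and engine speed are example values):

```python
import math

def piston_kinematics(t, rpm, l=6.0, r=2.0):
    """Piston position, velocity and acceleration at time t (seconds)
    for constant engine speed, using A = omega*t, v = x'*omega and
    a = x''*omega**2 with the closed-form angle domain derivatives."""
    omega = 2 * math.pi * rpm / 60
    A = omega * t
    s, c = math.sin(A), math.cos(A)
    root = math.sqrt(l**2 - r**2 * s**2)
    x = r * c + root
    xp = -r * s - r**2 * s * c / root
    xpp = -r * c - r**2 * (c**2 - s**2) / root - r**4 * s**2 * c**2 / root**3
    return x, xp * omega, xpp * omega**2

# At t = 0 (top dead center) velocity is zero and acceleration is
# large and negative (directed down the bore, toward the crank).
x0, v0, a0 = piston_kinematics(0.0, 3000)
```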
== Velocity maxima and minima ==
By definition, the velocity maxima and minima occur at the acceleration zeros (crossings of the horizontal axis).
=== Crank angle not right-angled ===
The velocity maxima and minima (see the acceleration zero crossings in the graphs below) depend on the rod length l and half stroke r, and do not occur when the crank angle A is right angled.
=== Crank-rod angle not right angled ===
The velocity maxima and minima do not necessarily occur when the crank makes a right angle with the rod. Counter-examples exist to disprove the statement "velocity maxima and minima only occur when the crank-rod angle is right angled".
==== Example ====
For a rod length of 6" and a crank radius of 2" (as shown in the example graph below), numerically solving the acceleration zero-crossings finds the velocity maxima/minima to be at crank angles of ±73.17530°. Using the triangle law of sines, the rod-vertical angle is then found to be 18.60639° and the crank-rod angle 88.21832°; the triangle's angles sum correctly, 88.21832° + 18.60639° + 73.17530° = 180.00000°. Clearly, in this example, the angle between the crank and the rod is not a right angle.
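The quoted angle can be reproduced numerically by bisecting the angle domain acceleration for its sign change between TDC and 90°; the block below restates the second derivative so it is self-contained:

```python
import math

def x_double_prime(A, l=6.0, r=2.0):
    """Second derivative of piston position with respect to crank angle."""
    s, c = math.sin(A), math.cos(A)
    root = math.sqrt(l**2 - r**2 * s**2)
    return -r * c - r**2 * (c**2 - s**2) / root - r**4 * s**2 * c**2 / root**3

# Bisect x''(A) = 0 between 0 and 90 degrees (x'' is negative at TDC
# and positive at 90 degrees), locating the velocity extremum.
lo, hi = 1e-6, math.pi / 2
for _ in range(60):
    mid = (lo + hi) / 2
    if x_double_prime(mid) < 0:
        lo = mid
    else:
        hi = mid
angle_deg = math.degrees((lo + hi) / 2)  # about 73.175 degrees
```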
== Example graphs of piston motion ==
=== Angle Domain Graphs ===
The graphs below show the angle domain equations for a constant rod length l (6.0") and various values of half stroke r (1.8", 2.0", 2.2").
Note in the graphs that L is rod length l and R is half stroke r.
=== Animation ===
Below is an animation of the piston motion equations with the same values of rod length and crank radius as in the graphs above.
==== Units of Convenience ====
Note that for the automotive/hot-rod use case, the most convenient unit of length for the piston-rod-crank geometry (and the one used by enthusiasts) is the inch, with typical dimensions being a 6" (inch) rod length and a 2" (inch) crank radius. This article uses units of inches (") for position, velocity and acceleration, as shown in the graphs above.
== See also ==
== References ==
Heywood, John Benjamin (1988). Internal Combustion Engine Fundamentals (1st ed.). McGraw Hill. ISBN 978-0070286375.
Taylor, Charles Fayette (1985). The Internal Combustion Engine in Theory and Practice, Vol 1 & 2 (2nd ed.). MIT Press. ISBN 978-0262700269.
"Piston Motion Basics @ epi-eng.com".
== External links ==
animated engines Animated Otto Engine
desmos Interactive Stroke vs Rod Piston Position and Derivatives
desmos Interactive Crank Animation
codecogs Piston Velocity and Acceleration
youtube Rotating SBC 350 Engine
youtube 3D Animation of V8 Engine
youtube Inside V8 Engine | Wikipedia/Piston_motion_equations |
An energy market is a type of commodity market on which electricity, heat, and fuel products are traded. Natural gas and electricity are examples of products traded on an energy market. Other energy commodities include oil, coal, carbon emissions (greenhouse gases), nuclear power, solar energy and wind energy. Due to the difficulty of storing and transporting energy, current and future prices of energy are rarely linked: energy purchased at the current price is difficult (or impossible) to store and then sell at a later date. There are two types of pricing scheme: the spot market and the forward market.
Typically, energy development stems from a government's energy policy that encourages the development of a competitive (as opposed to non-competitive) energy industry.
Until the 1970s when energy markets underwent dramatic changes, such markets were characterized by monopoly-based organizational structures. For instance, most of the world's petroleum reserves were controlled by the Seven Sisters. In the case of petroleum energy trade, circumstances then changed considerably in 1973 as the influence of OPEC grew and the repercussions of the 1973 oil crisis affected global energy markets.
== Liberalization and regulation ==
Energy markets have been liberalized in some countries. They are regulated by national and international authorities (including liberalized markets) to protect consumer rights and to avoid oligopolies. Some such regulators include: the Australian Energy Market Commission in Australia, the Energy Market Authority in Singapore, the Energy Community in Europe (which replaced the South-East Europe Regional Energy Market) and the Nordic energy market for Nordic countries. Members of the European Union are required to liberalize their energy markets.
Regulators tend to discourage price volatility, to reform markets where needed, and both to search for evidence of, and to enforce compliance against, anti-competitive behavior (such as the formation of an illegal monopoly).
Due to the increase in oil prices since 2003, coupled with increased market speculation, energy markets have come under review; by 2008, several conferences had been organized to address the energy market concerns of petroleum-importing nations. In Russia, the markets are being reformed by the introduction of harmonized, all-Russian consumer prices.
== Current and past energy usage in the United States ==
The United States currently uses over four trillion kilowatt-hours (kWh) per year in order to fulfill its energy needs. Data given by the United States Energy Information Administration (EIA) has shown a steady growth in energy usage dating back to 1990, at which time the country consumed around 3 trillion kWh of energy. Traditionally, the United States's energy sources have included oil, coal, nuclear, renewables and natural gas. The breakdown of each of these fuels as a percentage of the overall consumption in the year 1993, per EIA was: coal at 53%, nuclear energy at 19%, natural gas at 13%, renewable energy at 11% and oil at 4% of the overall energy needs. In 2011, the breakdown was: coal at 42%, nuclear at 19%, natural gas at 25%, renewable energy at 13% and oil accounted for 1%. These figures show a drop in energy derived from coal and a significant increase in both natural gas and renewable energy sources.
According to the United States Geological Survey (USGS) data from 2006, hydroelectric power accounted for most of the renewable energy production in the United States. However, increasing government funding, grants, and other incentives have been drawing many companies towards the biofuel, wind and solar energy production industries.
== Moving towards renewable energy ==
In recent years, there has been a movement towards renewable and sustainable energy in the United States. This has been caused by many factors, including consequences of climate change, affordability, government funding, tax incentives and potential profits in the energy market of the United States. According to the most recent projections by the EIA forecasting to the year 2040, the renewable energy industry will grow from providing 13% of the power in the year 2011 to 16% in 2040. This accounts for 32% of the overall growth during the same time period. This increase could be profitable for companies that expand into the renewable energy market in the United States.
This movement towards renewable energy has also been affected by the stability of the global market. Recent economic instability in countries in the Middle East and elsewhere has driven American companies to further develop American independence from foreign sources of energy, such as oil. The long-term projections by the EIA for renewable energy capacity in the United States are also sensitive to factors such as the cost and availability of domestic oil and natural gas production.
Countries around the world also face the challenge of up-skilling professionals in order to create the workforce required for the transition from fossil fuel to renewable energy. Organisations such as the Renewable Energy Institute are assisting with this transition, but more is required to meet targets set by governments around the world, including those set by the Paris Agreement.
== Renewable energy sources ==
Currently, the majority of the United States's renewable energy production comes from hydroelectric power, solar power and wind power. According to the U.S. Department of Energy, the cost of wind power doubled between 2002 and 2008. Since then, however, the price of wind power has declined by 33%. Various factors have contributed to this decline, such as government subsidies, tax breaks, technological advancement and the cost of oil and natural gas.
Hydroelectric power has been the main source of renewable energy because it has been reliable over time. Nonetheless, there are challenges in hydropower. For example, traditional hydroelectric power has required damming rivers and other sources of water. Damming disrupts the environment in and near the water, most directly because the dam necessarily creates a lake at the water source. Other complications may include protests by environmentalists. However, new forms of hydroelectric power that harness wave energy from the oceans have been in development in recent years. Although these power sources need further development to become economically viable, they have the potential to become significant sources of energy.
In recent years, wind energy and solar energy have made the largest steps towards significant energy production in the United States. These sources have little impact on the environment and have the highest potential of the renewable energy sources used today. Advances in technology, government tax rebates, subsidies, grants, and economic need have all led to huge steps towards the usage of sustainable wind and solar energy today.
== In the U.S. ==
The energy industry is the third-largest industry in the United States. The market is expected to attract over $700 billion of investment over the next two decades, according to SelectUSA. Furthermore, many federal resources entice both domestic and foreign companies to develop the industry in the United States, including the Department of Energy Loan Guarantee, the American Recovery and Reinvestment Act, the Smart Grid Stimulus Program, and an Executive Order on Industrial Energy Efficiency. Harnessing the power of wind, solar and hydroelectric resources will be the focus of the United States' development of renewable sources of energy.
== See also ==
Commodity value
Cost competitiveness of fuel sources
Demand destruction
Energy crisis
Energy derivative
Energy intensity
Food vs. fuel
Renewable energy commercialization
Cost of electricity by source
Spark spread
== References ==
=== External links ===
UK Energy Wholesale Market Review - Weekly Analysis - Baseload electricity, Peak electricity, Seasonal power prices, Commodity price movements and Wholesale price snapshot
Jay Wright Forrester (July 14, 1918 – November 16, 2016) was an American computer engineer, management theorist and systems scientist. He spent his entire career at Massachusetts Institute of Technology, entering as a graduate student in 1939, and eventually retiring in 1989.
During World War II Forrester worked on servomechanisms as a research assistant to Gordon S. Brown. After the war he headed MIT's Whirlwind digital computer project. There he is credited as a co-inventor of magnetic core memory, the predominant form of random-access computer memory during the most explosive years of digital computer development (between 1955 and 1975). It was part of a family of related technologies which bridged the gap between vacuum tubes and semiconductors by exploiting the magnetic properties of materials to perform switching and amplification. His team is also believed to have created the first animation in the history of computer graphics, a "jumping ball" on an oscilloscope.
Later, Forrester was a professor at the MIT Sloan School of Management, where he introduced the Forrester effect describing fluctuations in supply chains. He has been credited as a founder of system dynamics, which deals with the simulation of interactions between objects in dynamic systems. After his initial efforts in industrial simulation, Forrester attempted to simulate urban dynamics and then world dynamics, developing a model with the Club of Rome along the lines of the model popularized in The Limits to Growth. Today system dynamics is most often applied to research and consulting in organizations and other social systems.
== Early life and education ==
Forrester was born on a farm near Anselmo, Nebraska, where "his early interest in electricity was spurred, perhaps, by the fact that the ranch had none. While in high school, he built a wind-driven, 12-volt electrical system using old car parts—it gave the ranch its first electric power."
Forrester received his Bachelor of Science in Electrical Engineering in 1939 from the University of Nebraska–Lincoln. He went on to graduate school at the Massachusetts Institute of Technology, where he worked with servomechanism pioneer Gordon S. Brown and gained his master's degree in 1945 with a thesis on 'Hydraulic Servomechanism Developments'. In 1949 he was inducted into Eta Kappa Nu, the Electrical & Computer Engineering Honor Society.
== Career ==
=== Whirlwind projects ===
During the late 1940s and early 50s, Forrester continued research in electrical and computer engineering at MIT, heading the Whirlwind project. Trying to design an aircraft simulator, the group moved away from an initial analog design to develop a digital computer. As a key part of this design, Forrester perfected and patented multi-dimensional addressable magnetic-core memory, the forerunner of today's RAM. In 1948-49 the Whirlwind team created the first animation in the history of computer graphics, a "jumping ball" on an oscilloscope. Whirlwind began operation in 1951, the first digital computer to operate in real time and to use video displays for output. It subsequently evolved into the air defence system Semi-Automatic Ground Environment (SAGE).
=== DEC board member ===
Forrester was invited to join the board of Digital Equipment Corporation by Ken Olsen in 1957, and advised the early company on management science. He left before 1966, following DEC's reorganization into a product-line-led company.
=== Forrester effect ===
In 1956, Forrester moved to the MIT Sloan School of Management as Germeshausen professor. After his retirement, he continued until 1989 as Professor Emeritus and Senior Lecturer. In 1961 he published his seminal book, Industrial Dynamics, the first work in the field of System Dynamics. The work resulted from analyzing the operations of Sprague Electric in Massachusetts. The study was the first model of supply chains, showing in this case that inventory fluctuations were not due to external factors as thought, but rather to internal corporate dynamics that his continuous modelling approach could detect. The phenomenon, originally called the Forrester effect, is today more frequently described as the "bullwhip effect".
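The Forrester effect can be illustrated with a deliberately minimal inventory model (a sketch only, not Forrester's actual formulation): each stage orders what it just shipped plus a correction toward a target inventory, and a one-time step in consumer demand produces progressively larger order swings upstream.

```python
def simulate_orders(demand, target=20, adjust=0.5):
    """One supply-chain stage: each period the stage ships the demand
    it observes, then orders enough to replace shipments plus a
    fraction `adjust` of its inventory shortfall.  Returns the order
    stream seen by the next stage upstream."""
    inventory = target + demand[0]   # start in steady state
    orders = []
    for d in demand:
        inventory -= d                              # ship this period
        order = d + adjust * (target - inventory)   # replace + correct
        inventory += order                          # delivery (no lead time)
        orders.append(order)
    return orders

consumer = [10] * 5 + [14] * 15        # a one-time step in end demand
retailer = simulate_orders(consumer)   # orders placed by the retailer
wholesaler = simulate_orders(retailer) # orders placed by the wholesaler
```

Here the peak order grows at each stage (14 units of consumer demand become a 16-unit retailer order and a 19-unit wholesaler order), the amplification Forrester traced to internal corporate dynamics rather than external shocks.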
=== System dynamics ===
Forrester was the founder of system dynamics, which deals with the simulation of interactions between objects in dynamic systems. Industrial Dynamics was the first book Forrester wrote using system dynamics to analyze industrial business cycles. Several years later, interactions with former Boston Mayor John F. Collins led Forrester to write Urban Dynamics, which sparked an ongoing debate on the feasibility of modeling broader social problems. The book went on to influence the video game SimCity.
Forrester's 1971 paper 'Counterintuitive Behavior of Social Systems' argued that the use of computerized system models to inform social policy was superior to simple debate, both in generating insight into the root causes of problems and in understanding the likely effects of proposed solutions. He characterized normal debate and discussion as being dominated by inexact mental models:
The mental model is fuzzy. It is incomplete. It is imprecisely stated. Furthermore, within one individual, a mental model changes with time and even during the flow of a single conversation. The human mind assembles a few relationships to fit the context of a discussion. As the subject shifts so does the model. When only a single topic is being discussed, each participant in a conversation employs a different mental model to interpret the subject. Fundamental assumptions differ but are never brought into the open. Goals are different and are left unstated. It is little wonder that compromise takes so long. And it is not surprising that consensus leads to laws and programs that fail in their objectives or produce new difficulties greater than those that have been relieved.
The paper summarized the results of a previous study on the system dynamics governing the economies of urban centers, which showed "how industry, housing, and people interact with each other as a city grows and decays." The study's findings, presented more fully in Forrester's 1969 book Urban Dynamics, suggested that the root cause of depressed economic conditions was a shortage of job opportunities relative to the population level, and that the most popular solutions proposed at the time (e.g. increasing low-income housing availability, or reducing real estate taxes) counter-intuitively would worsen the situation by increasing this relative shortage. The paper further argued that measures to reduce the shortage—such as converting land use from housing to industry, or increasing real estate taxes to spur property redevelopment—would be similarly counter-effective.
=== Club of Rome ===
'Counterintuitive Behavior of Social Systems' also sketched a model of world dynamics that correlated population, food production, industrial development, pollution, availability of natural resources, and quality of life, and attempted future projections of those values under various assumptions. Forrester presented this model more fully in his 1971 book World Dynamics, notable for serving as the initial basis for the World3 model used by Donella and Dennis Meadows in their popular 1972 book The Limits to Growth.
Forrester met Aurelio Peccei, a founder of the Club of Rome, in 1970. He later met with the Club of Rome to discuss issues surrounding global sustainability; the book World Dynamics followed. World Dynamics took on modeling the complex interactions of the world economy, population and ecology, which was controversial (see also Donella Meadows and The Limits to Growth). It was the start of the field of global modeling. Forrester continued working in applications of system dynamics and promoting its use in education.
== Awards ==
In 1972, Forrester received the IEEE Medal of Honor, the IEEE's highest award.
In 1982, he received the IEEE Computer Pioneer Award. In 1995, he was made a Fellow of the Computer History Museum "for his perfecting of core memory technology into a practical computer memory device; for fundamental contributions to early computer systems design and development". In 2006, he was inducted into the Operational Research Hall of Fame.
== Publications ==
Forrester wrote several books, including:
1961. Industrial Dynamics. M.I.T. Press.
1968. Principles of Systems, 2nd ed. Pegasus Communications.
1969. Urban Dynamics. Pegasus Communications.
1971. World Dynamics. Wright-Allen Press.
1975. Collected Papers of Jay W. Forrester. Pegasus Communications.
His articles and papers include:
1958, 'Industrial Dynamics – A Major Breakthrough for Decision Makers', Harvard Business Review, Vol. 36, No. 4, pp. 37–66.
1968, 'Market Growth as Influenced by Capital Investment', Industrial Management Review, Vol. IX, No. 2, Winter 1968.
1971, 'Counterintuitive Behavior of Social Systems', Theory and Decision, Vol. 2, December 1971, pp. 109–140. Also available online.
1989, 'The Beginning of System Dynamics'. Banquet Talk at the international meeting of the System Dynamics Society, Stuttgart, Germany, July 13, 1989. MIT System Dynamics Group Memo D.
1992, 'System Dynamics and Learner-Centered-Learning in Kindergarten through 12th Grade Education.'
1993, 'System Dynamics and the Lessons of 35 Years', in Kenyon B. Greene (ed.) A Systems-Based Approach to Policymaking, New York: Springer, pp. 199–240.
1996, 'System Dynamics and K–12 Teachers: a lecture at the University of Virginia School of Education'.
1998, 'Designing the Future'. Lecture at Universidad de Sevilla, December 15, 1998.
1999, 'System Dynamics: the Foundation Under Systems Thinking'. Cambridge, MA: Sloan School of Management.
2016, 'Learning through System Dynamics as preparation for the 21st Century', System Dynamics Review, Vol. 32, pp. 187–203.
== See also ==
DYNAMO (programming language)
Roger Sisson
== References ==
== External links ==
Selected papers by Forrester.
Jay Wright Forrester at the Mathematics Genealogy Project
Biography of Jay W. Forrester from the Institute for Operations Research and the Management Sciences
"The many careers of Jay Forrester," MIT Technology Review, June 23, 2015
Jay Wright Forrester Papers, MC 439, box X. Massachusetts Institute of Technology, Institute Archives and Special Collections, Cambridge, Massachusetts.
J. W. Forrester and the History of System Dynamics | Wikipedia/World_Dynamics |
The MoSCoW method is a prioritization technique. It is used in software development, management, business analysis, and project management to reach a common understanding with stakeholders on the importance they place on the delivery of each requirement; it is also known as MoSCoW prioritization or MoSCoW analysis.
The term MoSCoW itself is an acronym derived from the first letter of each of four prioritization categories:
M - Must have,
S - Should have,
C - Could have,
W - Won't have.
The interstitial Os are added to make the word pronounceable. While the Os are usually in lower-case to indicate that they do not stand for anything, the all-capitals MOSCOW is also used.
== Background ==
This prioritization method was developed by Dai Clegg in 1994 for use in rapid application development (RAD). It was first used extensively with the dynamic systems development method (DSDM) from 2002.
MoSCoW is often used with timeboxing, where a deadline is fixed so that the focus must be on the most important requirements, and is commonly used in agile software development approaches such as Scrum, rapid application development (RAD), and DSDM.
== Prioritization of requirements ==
All requirements are important; however, to deliver the greatest and most immediate business benefits early, the requirements must be prioritized. Developers will initially try to deliver all the Must have, Should have and Could have requirements, but the Should and Could requirements will be the first to be removed if the delivery timescale looks threatened.
The plain English meaning of the prioritization categories has value in getting customers to better understand the impact of setting a priority, compared to alternatives like High, Medium and Low.
The categories are typically understood as:
Must have
Requirements labelled as Must have are critical to the current delivery timebox in order for it to be a success. If even one Must have requirement is not included, the project delivery should be considered a failure (note: requirements can be downgraded from Must have, by agreement with all relevant stakeholders; for example, when new requirements are deemed more important). MUST can also be considered an acronym for the Minimum Usable Subset.
Should have
Requirements labelled as Should have are important but not necessary for delivery in the current delivery timebox. While Should have requirements can be as important as Must have, they are often not as time-critical or there may be another way to satisfy the requirement so that it can be held back until a future delivery timebox.
Could have
Requirements labelled as Could have are desirable but not necessary and could improve the user experience or customer satisfaction for a little development cost. These will typically be included if time and resources permit.
Won't have (this time)
Requirements labelled as Won't have have been agreed by stakeholders to be the least-critical, lowest-payback items, or not appropriate at that time. As a result, Won't have requirements are not planned into the schedule for the next delivery timebox; they are either dropped or reconsidered for inclusion in a later timebox. (Note: occasionally the term Would like to have is used; however, that usage is incorrect, as this last priority clearly states that something is outside the scope of delivery.) In editions 3 and 4 of its Business Analysis book, the BCS describes 'W' as 'Want to have but not this time around'.
=== Variants ===
Sometimes W is used to mean wish (or would), i.e. still possible but unlikely to be included (and less likely than could). This is then distinguished from X (for excluded), used for items that are explicitly not included. Would is used to indicate features that are not required now but should be considered in architectural terms during design as future expansion opportunities; this avoids the risk of dead-end designs that would inhibit a particular feature being offered in the future.
== Use in new product development ==
In new product development, particularly those following agile software development approaches, there is always more to do than there is time or funding to permit (hence the need for prioritization).
For example, should a team have too many potential epics (i.e., high-level stories) for the next release of their product, they could use the MoSCoW method to select which epics are Must have, which Should have, and so on; the minimum viable product (or MVP) would be all those epics marked as Must have. Oftentimes, a team will find that, even after identifying their MVP, they have too much work for their expected capacity. In such cases, the team could then use the MoSCoW method to select which features (or stories, if that is the subset of epics in their organisation) are Must have, Should have, and so on; the minimum marketable features (or MMF) would be all those marked as Must have. If there is sufficient capacity after selecting the MVP or MMF, the team could then plan to include Should have and even Could have items too.
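The selection logic described above can be sketched in a few lines of Python. The epic names, point estimates, and capacity figure are invented for illustration:

```python
# Hedged sketch: the Must-have items form the MVP and are always included;
# leftover capacity is then filled with Should-have and Could-have items.

PRIORITY_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Wont": 3}

def plan_release(epics, capacity):
    """epics: list of (name, priority, estimate). Returns (selected names, leftover capacity)."""
    selected = []
    for name, priority, estimate in epics:
        if priority == "Must":
            selected.append(name)
            capacity -= estimate
    # Fill remaining capacity: Should items first, then Could items
    optional = [e for e in epics if e[1] in ("Should", "Could")]
    optional.sort(key=lambda e: PRIORITY_ORDER[e[1]])
    for name, priority, estimate in optional:
        if estimate <= capacity:
            selected.append(name)
            capacity -= estimate
    return selected, capacity

epics = [
    ("login", "Must", 5), ("checkout", "Must", 8),
    ("wishlist", "Should", 5), ("themes", "Could", 3),
    ("ai-chat", "Wont", 13),
]
selected, left = plan_release(epics, capacity=20)
# -> selected == ["login", "checkout", "wishlist"], left == 2
# The MVP is the two Must-have epics (13 points); "wishlist" fits the
# remaining 7 points, "themes" (3 points) no longer fits, and the
# Won't-have epic is never considered.
```

Note that, as the criticism section below observes, this mechanical selection says nothing about how to rank items within the same category; real teams typically apply a secondary ordering (e.g. business value per point) inside each band.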
== Criticism ==
Criticism of the MoSCoW method includes:
Does not help decide between multiple requirements within the same priority.
Lack of rationale around how to rank competing requirements: why something is a Must rather than a Should.
Ambiguity over timing, especially on the Won't have category: whether it is not in this release or not ever.
Potential for political focus on building new features over technical improvements (such as refactoring).
== Other methods ==
Other methods used for product prioritization include:
Kano model prioritization method
== References ==
== External links ==
RFC 2119 (Requirement Levels) This RFC defines requirement levels to be used in formal documentation. It is commonly used in contracts and other legal documentation. Noted here as the wording is similar but not necessarily the meaning.
Buffered MoSCoW Rules This essay proposes the use of a modified set of MoSCoW rules that accomplish the objectives of prioritizing deliverables and providing a degree of assurance as a function of the uncertainty of the underlying estimates.
MoSCoW Prioritisation Steps and tips for prioritisation following the DSDM MoSCoW rules. | Wikipedia/MoSCoW_method |
A system is a group of interacting or interrelated elements that act according to a set of rules to form a unified whole. A system, surrounded and influenced by its environment, is described by its boundaries, structure and purpose and is expressed in its functioning. Systems are the subjects of study of systems theory and other systems sciences.
Systems have several common properties and characteristics, including structure, function(s), behavior and interconnectivity.
== Etymology ==
The term system comes from the Latin word systēma, in turn from Greek σύστημα systēma: "whole concept made of several parts or members, system", literally "composition".
== History ==
In the 19th century, the French physicist Nicolas Léonard Sadi Carnot, who studied thermodynamics, pioneered the development of the concept of a system in the natural sciences. In 1824, he studied the system which he called the working substance (typically a body of water vapor) in steam engines, in regard to the system's ability to do work when heat is applied to it. The working substance could be put in contact with either a boiler, a cold reservoir (a stream of cold water), or a piston (on which the working body could do work by pushing on it). In 1850, the German physicist Rudolf Clausius generalized this picture to include the concept of the surroundings and began to use the term working body when referring to the system.
The biologist Ludwig von Bertalanffy became one of the pioneers of the general systems theory. In 1945 he introduced models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relation or 'forces' between them.
In the late 1940s and mid-50s, Norbert Wiener and Ross Ashby pioneered the use of mathematics to study systems of control and communication, calling it cybernetics.
In the 1960s, Marshall McLuhan applied general systems theory in an approach that he called a field approach and figure/ground analysis, to the study of media theory.
In the 1980s, John Henry Holland, Murray Gell-Mann and others coined the term complex adaptive system at the interdisciplinary Santa Fe Institute.
== Concepts ==
=== Environment and boundaries ===
Systems theory views the world as a complex system of interconnected parts. One scopes a system by defining its boundary; this means choosing which entities are inside the system and which are outside—part of the environment. One can make simplified representations (models) of the system in order to understand it and to predict or impact its future behavior. These models may define the structure and behavior of the system.
=== Natural and human-made systems ===
There are natural and human-made (designed) systems. Natural systems may not have an apparent objective but their behavior can be interpreted as purposeful by an observer. Human-made systems are made with various purposes that are achieved by some action performed by or with the system. The parts of a system must be related; they must be "designed to work as a coherent entity"—otherwise they would be two or more distinct systems.
=== Theoretical framework ===
Most systems are open systems, exchanging matter and energy with their surroundings, such as a car, a coffeemaker, or Earth. A closed system exchanges energy, but not matter, with its environment, such as a computer or the project Biosphere 2. An isolated system exchanges neither matter nor energy with its environment. A theoretical example of such a system is the Universe.
=== Process and transformation process ===
An open system can also be viewed as a bounded transformation process, that is, a black box that is a process or collection of processes that transform inputs into outputs. Inputs are consumed; outputs are produced. The concept of input and output here is very broad. For example, an output of a passenger ship is the movement of people from departure to destination.
=== System model ===
A system comprises multiple views. Human-made systems may have such views as concept, analysis, design, implementation, deployment, structure, behavior, input data, and output data views. A system model is required to describe and represent all these views.
=== Systems architecture ===
A systems architecture, using one single integrated model for the description of multiple views, is a kind of system model.
=== Subsystem ===
A subsystem is a set of elements, which is a system itself, and a component of a larger system. The IBM Mainframe Job Entry Subsystem family (JES1, JES2, JES3, and their HASP/ASP predecessors) are examples. The main elements they have in common are the components that handle input, scheduling, spooling and output; they also have the ability to interact with local and remote operators.
A subsystem description is a system object that contains information defining the characteristics of an operating environment controlled by the system. The data tests are performed to verify the correctness of the individual subsystem configuration data (e.g. MA Length, Static Speed Profile, …) and they are related to a single subsystem in order to test its Specific Application (SA).
== Analysis ==
There are many kinds of systems that can be analyzed both quantitatively and qualitatively. For example, in an analysis of urban systems dynamics, A. W. Steiss defined five intersecting systems, including the physical subsystem and behavioral system. For sociological models influenced by systems theory, Kenneth D. Bailey defined systems in terms of conceptual, concrete, and abstract systems, either isolated, closed, or open. Walter F. Buckley defined systems in sociology in terms of mechanical, organic, and process models. Bela H. Banathy cautioned that for any inquiry into a system understanding its kind is crucial, and defined natural and designed, i.e. artificial, systems. For example, natural systems include subatomic systems, living systems, the Solar System, galaxies, and the Universe, while artificial systems include man-made physical structures, hybrids of natural and artificial systems, and conceptual knowledge. The human elements of organization and functions are emphasized with their relevant abstract systems and representations.
Artificial systems inherently have a major defect: they must be premised on one or more fundamental assumptions upon which additional knowledge is built. This is in strict alignment with Gödel's incompleteness theorems. An artificial system can be defined as a "consistent formalized system which contains elementary arithmetic". These fundamental assumptions are not inherently deleterious, but they must by definition be assumed to be true, and if they are actually false then the system is not as structurally integral as assumed (i.e., if the initial expression is false, then the artificial system is not a "consistent formalized system"). For example, in geometry this is very evident in the postulation of theorems and the extrapolation of proofs from them.
George J. Klir maintained that no "classification is complete and perfect for all purposes", and defined systems as abstract, real, and conceptual physical systems, bounded and unbounded systems, discrete to continuous, pulse to hybrid systems, etc. The interactions between systems and their environments are categorized as relatively closed and open systems. Important distinctions have also been made between hard systems—–technical in nature and amenable to methods such as systems engineering, operations research, and quantitative systems analysis—and soft systems that involve people and organizations, commonly associated with concepts developed by Peter Checkland and Brian Wilson through soft systems methodology (SSM) involving methods such as action research and emphasis of participatory designs. Where hard systems might be identified as more scientific, the distinction between them is often elusive.
=== Economic system ===
An economic system is a social institution which deals with the production, distribution and consumption of goods and services in a particular society. The economic system is composed of people, institutions and their relationships to resources, such as the convention of property. It addresses the problems of economics, like the allocation and scarcity of resources.
The international sphere of interacting states is described and analyzed in systems terms by several international relations scholars, most notably in the neorealist school. This systems mode of international analysis has however been challenged by other schools of international relations thought, most notably the constructivist school, which argues that an over-large focus on systems and structures can obscure the role of individual agency in social interactions. Systems-based models of international relations also underlie the vision of the international sphere held by the liberal institutionalist school of thought, which places more emphasis on systems generated by rules and interaction governance, particularly economic governance.
=== Information and computer science ===
In computer science and information science, an information system is a hardware system, software system, or combination, which has components as its structure and observable inter-process communications as its behavior.
There are systems of counting, as with Roman numerals, and various systems for filing papers, or catalogs, and various library systems, of which the Dewey Decimal Classification is an example. This still fits with the definition of components that are connected together (in this case to facilitate the flow of information).
System can also refer to a framework, aka platform, be it software or hardware, designed to allow software programs to run. A flaw in a component or system can cause the component itself or an entire system to fail to perform its required function, e.g., an incorrect statement or data definition.
=== Engineering and physics ===
In engineering and physics, a physical system is the portion of the universe that is being studied (of which a thermodynamic system is one major example). Engineering also has the concept of a system referring to all of the parts and interactions between parts of a complex project. Systems engineering is the branch of engineering that studies how this type of system should be planned, designed, implemented, built, and maintained.
=== Sociology, cognitive science and management research ===
Social and cognitive sciences recognize systems in models of individual humans and in human societies. They include human brain functions and mental processes as well as normative ethics systems and social and cultural behavioral patterns.
In management science, operations research and organizational development, human organizations are viewed as management systems of interacting components such as subsystems or system aggregates, which are carriers of numerous complex business processes (organizational behaviors) and organizational structures. Organizational development theorist Peter Senge developed the notion of organizations as systems in his book The Fifth Discipline.
Organizational theorists such as Margaret Wheatley have also described the workings of organizational systems in new metaphoric contexts, such as quantum physics, chaos theory, and the self-organization of systems.
=== Pure logic ===
There is also such a thing as a logical system. An obvious example is the calculus developed simultaneously by Leibniz and Isaac Newton. Another example is George Boole's Boolean operators. Other examples relate specifically to philosophy, biology, or cognitive science. Maslow's hierarchy of needs applies psychology to biology by using pure logic. Numerous psychologists, including Carl Jung and Sigmund Freud developed systems that logically organize psychological domains, such as personalities, motivations, or intellect and desire.
=== Strategic thinking ===
In 1988, military strategist John A. Warden III introduced the Five Ring System model in his book, The Air Campaign, contending that any complex system could be broken down into five concentric rings. Each ring—leadership, processes, infrastructure, population and action units—could be used to isolate key elements of any system that needed change. The model was used effectively by Air Force planners in the Gulf War. In the late 1990s, Warden applied his model to business strategy.
== See also ==
Complexity
Complexity theory and organizations
Formal system
Glossary of systems theory
Market (economics)
Meta-system
System of systems
System of systems engineering
Systems art
Systems in the human body
== References ==
== Bibliography ==
== External links ==
Definitions of Systems and Models by Michael Pidwirny, 1999–2007. | Wikipedia/Systems |
Materials management is a core supply chain function and includes supply chain planning and supply chain execution capabilities. Specifically, materials management is the capability firms use to plan total material requirements. The material requirements are communicated to procurement and other functions for sourcing. Materials management is also responsible for determining the amount of material to be deployed at each stocking location across the supply chain, establishing material replenishment plans, determining inventory levels to hold for each type of inventory (raw material, WIP, finished goods), and communicating information regarding material needs throughout the extended supply chain.
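As one illustration of the replenishment-planning decisions described above, a standard reorder-point calculation sets the stock level at which a location should reorder. The demand figures and service-level factor below are invented assumptions:

```python
import math

# Hedged sketch of a reorder-point calculation, one common way to set the
# inventory levels and replenishment triggers for a stocking location.
# The z-score and demand figures are illustrative assumptions.

def reorder_point(daily_demand, lead_time_days, demand_std_dev, z=1.65):
    """Reorder when on-hand stock falls to this level.
    z=1.65 targets roughly a 95% service level under normally distributed demand."""
    cycle_stock = daily_demand * lead_time_days                      # demand during resupply
    safety_stock = z * demand_std_dev * math.sqrt(lead_time_days)    # buffer for variability
    return cycle_stock + safety_stock

# A location consuming 40 units/day, with a 5-day supplier lead time
# and a daily demand standard deviation of 8 units:
rop = reorder_point(daily_demand=40, lead_time_days=5, demand_std_dev=8)
# cycle stock = 200; safety stock = 1.65 * 8 * sqrt(5) ≈ 29.5; ROP ≈ 229.5
```

Communicating a figure like this per material and per location is, in essence, the "material replenishment plan" the passage above describes.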
== Supply chain materials management areas of concentration ==
=== Goals ===
The goal of materials management is to provide an unbroken chain of components for production to manufacture goods on time for customers. The materials department is charged with releasing materials to a supply base and ensuring that the materials are delivered on time to the company using the correct carrier. Materials management is generally measured by on-time delivery to the customer, on-time delivery from the supply base, meeting the freight budget, managing inventory shrink, and inventory accuracy. The materials department is also charged with the responsibility of managing new launches.
In some companies materials management is also charged with the procurement of materials by establishing and managing a supply base. In other companies the procurement and management of the supply base is the responsibility of a separate purchasing department. The purchasing department is then responsible for the purchased price variances from the supply base.
In large companies with multitudes of customer changes to the final product there may be a separate logistics department that is responsible for all new acquisition launches and customer changes. This logistics department ensures that the launch materials are procured for production and then transfers the responsibility to the plant materials management.
=== Materials management ===
The major challenge that materials managers face is maintaining a consistent flow of materials for production. There are many factors that inhibit the accuracy of inventory which results in production shortages, premium freight, and often inventory adjustments.
The major issues that all materials managers face are incorrect bills of materials, inaccurate cycle counts, unreported scrap, shipping errors, receiving errors, and production reporting errors. Materials managers have striven to determine how to manage these issues in the business sectors of manufacturing since the beginning of the industrial revolution.
== Materials management in construction ==
Materials typically account for a large portion of a construction project's budget, and may account for more than 70% of a project's cost. Despite this, discussions of project budgets and efficiency tend to focus on labour and cost reduction. Materials management often gets overlooked, even though successful projects result from a successful blend of labour, materials, and equipment management. When materials are tracked efficiently, project time can be optimized, costs can be saved, and quality can be maximized.
There is a lack of efficient materials management in capital and investment construction projects, because each project is typically viewed as an individual effort, with each project needing a unique plan. The geographical location and technology needed for different projects will present distinctive challenges, but in general all projects will have elements that can be predicted from previous construction projects.
=== Types of construction projects and how this affects materials management ===
Typically, the more technically challenging a project is, the more difficult materials management becomes; however, the need for transparent materials tracking is highlighted in these types of projects.
Residential construction projects: residential projects can be homes or apartment buildings that are intended for living. Managing material flows in these projects is usually easier, because the engineering and construction teams, as well as the budgets, are typically smaller than in the project types listed later in this article. Also, technical specifications for residential projects do not vary as much as, for example, in heavy-industry construction projects.
Commercial construction projects: these types of projects include retail stores, restaurants and hotels. The complexity of the project and the needs for thorough material tracking will typically depend on the size of the project.
Specialized industrial construction projects: these projects are large-scale and technically complex. Examples of these types of projects include nuclear power plants, chemical processing plants, steel mills, pulp mills and oil refineries. The materials procured for these projects require specific engineering knowledge (i.e. piping, valves, motors, industrial tanks, fans, boilers, control valves etc.). The importance of material tracking in these types of projects is extremely high, because the project network is large, materials are procured from all over the world and the construction sites are typically in remote locations with poor infrastructure.
Industrial construction projects: examples of industrial construction projects include warehouses and manufacturing facilities. These types of projects tend to be slightly more complex than residential or commercial construction projects and they require more technical knowledge. This increases the need for efficient materials management.
=== Materials management in capital-heavy construction projects ===
Materials management is the process of planning and controlling material flows. It includes planning and procuring materials, supplier evaluation and selection, purchasing, expenditure, shipping, receipt processes for materials (including quality control), warehousing and inventory, and materials distribution. After the construction project finishes, maintenance of materials can also be viewed as part of materials management.
Material management processes and functions in large-scale capital projects encompass multiple organizations and integrated processes. Capital project supply networks typically include project owners, main contractors, EPC/M contractors, material suppliers, logistics partners and project site contractors.
=== Digital tools for materials management in construction ===
It is very common to use digital tools for materials management in capital projects. Materials requirements planning systems and procurement systems are widely used in the industry. Minimizing procurement costs by comparing bids is an essential part of reducing project costs. Computer-based systems are an excellent tool during the purchasing process, because equipment specifications, supplier selection, delivery time guarantees, shipping fees, and multiple other aspects of procurement can be automatically compared on one platform.
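The kind of automated bid comparison a procurement system performs can be sketched as a simple weighted scoring function. The suppliers, figures, and weighting scheme below are invented for illustration; real systems use far richer criteria:

```python
# Hedged sketch of multi-criteria bid comparison. Lower price, shipping fee,
# and lead time are better, so those terms are inverted; spec compliance is a
# 0-1 fraction where higher is better. All data and weights are invented.

def score_bid(bid, weights):
    return (
        weights["price"] / bid["price"]
        + weights["shipping"] / bid["shipping_fee"]
        + weights["lead_time"] / bid["delivery_days"]
        + weights["spec"] * bid["spec_compliance"]
    )

bids = [
    {"supplier": "A", "price": 120_000, "shipping_fee": 4_000, "delivery_days": 45, "spec_compliance": 1.00},
    {"supplier": "B", "price": 110_000, "shipping_fee": 6_500, "delivery_days": 60, "spec_compliance": 0.90},
    {"supplier": "C", "price": 135_000, "shipping_fee": 3_000, "delivery_days": 30, "spec_compliance": 0.95},
]
weights = {"price": 100_000, "shipping": 3_000, "lead_time": 30, "spec": 1.0}

best = max(bids, key=lambda b: score_bid(b, weights))
# Here supplier C wins: its higher price is outweighed by the fastest
# delivery and lowest shipping fee under this particular weighting.
```

The point of putting such comparisons on one platform is less the arithmetic than the consistency: every bid is evaluated against the same criteria, with the trade-offs made explicit in the weights.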
Material deliveries from the supplier to the construction site can be tracked using various tools. For example, project freight forwarders will typically have precise information on containers and deliveries sent to the construction site, but their systems typically lack insight into the specific materials and components within those deliveries. Details on packing lists will be attached to the packages in the delivery, and they will typically be sent to the purchaser via email. Other ways of tracking deliveries include RFID-tagging packages or components; the drawback of this method is that suppliers or purchasers have to invest in RFID tags. Common materials databases for the project network can also be implemented to share data on material deliveries.
Once the materials arrive at the construction site, receipt processes for the goods should be followed. Storage locations should be recorded so that individual components are easy to locate at the construction site, and inventory of the goods should be monitored as goods are taken for assembly. Storing procured materials appropriately is crucial for saving costs. For example, if electrical equipment is procured and delivered to the construction site in one lot to save on multiple delivery fees, the equipment that is not needed for immediate assembly has to be stored in waterproof locations. Digital tools can be used to plan for incoming deliveries and how to store them. The need for digital tools is further highlighted if materials are stored, for example, in contractor warehouses rather than at the construction site; this way all project parties will know where goods are located.
== See also ==
== References ==
== Further reading ==
== External links ==
Indian Institute of Materials Management
Association for Healthcare Resource & Materials Management (AHRMM)
(APICS)
Inventory Management System | Wikipedia/Materials_management |
Computers are used to generate numeric models that describe or display the complex interactions among multiple variables within a system. The complexity of the system arises from the stochastic (probabilistic) nature of its events, the rules for interaction among its elements, and the difficulty of perceiving the behavior of the system as a whole over time.
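A minimal example of such a stochastic simulation is a single-server queue driven by random arrival and service times. The rates below are invented; the analytic benchmark comes from standard M/M/1 queueing theory:

```python
import random

# Hedged sketch of a stochastic systems simulation: customers arrive at
# random (exponential) intervals and receive exponentially distributed
# service from a single server. All rates are illustrative assumptions.

def simulate_queue(n_customers=10_000, arrival_rate=0.9, service_rate=1.0, seed=42):
    rng = random.Random(seed)     # seeded so the run is reproducible
    clock = 0.0                   # time of the current arrival
    server_free_at = 0.0          # when the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)   # next arrival time
        start = max(clock, server_free_at)       # wait if the server is busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

avg_wait = simulate_queue()
# M/M/1 theory predicts a mean wait of lambda / (mu * (mu - lambda))
# = 0.9 / 0.1 = 9.0 at this 90% utilization; the simulated average
# fluctuates around that value from run to run.
```

This tiny model already exhibits the behavior the passage describes: no single rule produces the long waits, yet the interaction of random arrivals with a nearly saturated server makes them emerge over time.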
== Systems Simulation in Video Games ==
One of the most notable video games to incorporate systems simulation is SimCity, which simulates the interacting systems of a functioning city, including electricity, water, sewage, public transportation, population growth, and social dynamics such as jobs, education, and emergency response.
== See also ==
Agent-based model
Discrete event simulation
NetLogo
System dynamics
== References ==
== External links ==
A Brief Introduction to Systems Simulation
Resources and Courses in Systems Simulation
Guide to the Winter Simulation Conference Collection 1968-2003, 2013-2014 | Wikipedia/Systems_simulation |
Industrial and production engineering (IPE) is an interdisciplinary engineering discipline that includes manufacturing technology, engineering sciences, management science, and the optimization of complex processes, systems, or organizations. It is concerned with the understanding and application of engineering procedures in manufacturing processes and production methods. Industrial engineering dates back to the Industrial Revolution, which began in the 1700s, and was shaped by figures such as Adam Smith, Eli Whitney, Henry Ford, Frank and Lillian Gilbreth, Henry Gantt, and F. W. Taylor. After the 1970s, industrial and production engineering developed worldwide and began to make wide use of automation and robotics. Industrial and production engineering spans three areas: mechanical engineering (from which production engineering derives), industrial engineering, and management science.
The objective is to improve efficiency and effectiveness in manufacturing and quality control, and to reduce costs while making products more attractive and marketable. Industrial engineering is concerned with the development, improvement, and implementation of integrated systems of people, money, knowledge, information, equipment, energy, and materials, together with their analysis and synthesis. The principles of IPE draw on the mathematical, physical, and social sciences and on methods of engineering design to specify, predict, and evaluate the results obtained from the systems or processes currently in place or being developed. The target of production engineering is to complete the production process in the smoothest, most judicious, and most economical way. Production engineering overlaps substantially with manufacturing engineering and industrial engineering, and the concept is often used interchangeably with manufacturing engineering.
As for education, undergraduates normally start by taking courses such as physics, mathematics (calculus, linear algebra, differential equations), computer science, and chemistry. In the later years of their undergraduate careers, they take more major-specific courses such as production and inventory scheduling, process management, CAD/CAM manufacturing, and ergonomics. In some parts of the world, universities offer a combined bachelor's degree in industrial and production engineering, though most universities in the U.S. offer the two separately. Career paths open to industrial and production engineers include plant engineering, manufacturing engineering, quality engineering, process engineering, industrial management, project management, and production and distribution roles. Across these career paths, most industrial and production engineers start at a salary of at least $50,000.
== History ==
=== Industrial Revolution ===
The roots of the industrial engineering profession date back to the Industrial Revolution. The technologies that helped mechanize traditional manual operations in the textile industry, including the flying shuttle, the spinning jenny, and perhaps most importantly the steam engine, generated economies of scale that made mass production in centralized locations attractive for the first time. The concept of the production system had its genesis in the factories created by these innovations.
=== Specialization of labor ===
Adam Smith's concepts of division of labour and the "invisible hand" of capitalism introduced in his treatise "The Wealth of Nations" motivated many of the technological innovators of the Industrial Revolution to establish and implement factory systems. The efforts of James Watt and Matthew Boulton led to the first integrated machine manufacturing facility in the world, including the implementation of concepts such as cost control systems to reduce waste and increase productivity and the institution of skills training for craftsmen.
Charles Babbage became associated with industrial engineering because of the concepts he introduced in his book "On the Economy of Machinery and Manufactures", which he wrote as a result of his visits to factories in England and the United States in the early 1800s. The book covers subjects such as the time required to perform a specific task, the effects of subdividing tasks into smaller and less detailed elements, and the advantages to be gained from repetitive tasks.
=== Interchangeable parts ===
Eli Whitney and Simeon North proved the feasibility of the notion of interchangeable parts in the manufacture of muskets and pistols for the US Government. Under this system, individual parts were mass-produced to tolerances that enabled their use in any finished product. The result was a significant reduction in the need for specialized skilled workers, and it eventually produced the industrial environment that later pioneers of the field would study.
=== Modern development ===
==== Industrial engineering ====
From 1960 to 1975, the development of decision support systems for supply, such as material requirements planning (MRP), made it possible to address the timing of industrial activities (inventory, production, compounding, transportation, etc.). The Israeli scientist Dr. Jacob Rubinovitz installed the CMMS program, developed at IAI and Control-Data (Israel), in South Africa in 1976 and later worldwide.
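The time-phased netting at the heart of MRP can be sketched in a few lines. This is a minimal, hypothetical illustration of netting with lot-for-lot ordering and a fixed lead time, not the behavior of any particular MRP package; the demand figures and function name are invented.

```python
# A minimal sketch of MRP time-phased netting, assuming lot-for-lot ordering
# and a fixed lead time. Planned orders exactly cover each period's net
# requirement; the data below are illustrative.

def mrp_net(gross_requirements, on_hand, lead_time):
    """Return planned order releases per period (lot-for-lot)."""
    releases = [0] * len(gross_requirements)
    inventory = on_hand
    for period, demand in enumerate(gross_requirements):
        net = max(0, demand - inventory)        # net requirement this period
        inventory = max(0, inventory - demand)  # draw down on-hand stock
        if net > 0:
            release_period = max(0, period - lead_time)
            releases[release_period] += net     # release the order early enough
    return releases

demand = [0, 40, 0, 60, 20]
print(mrp_net(demand, on_hand=50, lead_time=1))  # [0, 0, 50, 20, 0]
```

The 50 units on hand cover the first two periods; the period-3 and period-4 shortfalls become orders released one period earlier, reflecting the lead time.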
In the seventies, with the penetration of Japanese management theories such as Kaizen and Kanban, Japan realized very high levels of quality and productivity. These theories improved issues of quality, delivery time, and flexibility. Companies in the west realized the great impact of Kaizen and started implementing their own Continuous improvement programs.
In the nineties, with the globalization of industry, the emphasis shifted to supply chain management and customer-oriented business process design. The theory of constraints, developed by the Israeli scientist Eliyahu M. Goldratt (1985), is another significant milestone in the field.
==== Manufacturing (production) engineering ====
Modern manufacturing engineering studies include all intermediate processes required for the production and integration of a product's components. Some industries, such as semiconductor and steel manufacturers, use the term "fabrication" for these processes.
Automation is used in different processes of manufacturing such as machining and welding. Automated manufacturing refers to the application of automation to produce goods in a factory. The main advantages of automated manufacturing for the manufacturing process are realized with effective implementation of automation and include: higher consistency and quality, reduction of lead times, simplification of production, reduced handling, improved work flow, and improved worker morale.
Robotics is the application of mechatronics and automation to create robots, which are often used in manufacturing to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in manufacturing engineering.
Robots allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and to ensure better quality. Many companies employ assembly lines of robots, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications.
== Overview ==
=== Industrial engineering ===
Industrial engineering is the branch of engineering that involves figuring out how to make or do things better. Industrial engineers are concerned with reducing production costs, increasing efficiency, improving the quality of products and services, ensuring worker health and safety, protecting the environment and complying with government regulations.
The various fields and topics that industrial engineers are involved with include:
Manufacturing engineering
Engineering management
Process engineering: design, operation, control, and optimization of chemical, physical, and biological processes.
Systems engineering: an interdisciplinary field of engineering that focuses on how to design and manage complex engineering systems over their life cycles.
Software engineering: an interdisciplinary field of engineering that focuses on the design, development, maintenance, testing, and evaluation of the software that makes computers and other software-containing devices work.
Safety engineering: an engineering discipline which assures that engineered systems provide acceptable levels of safety.
Data science: the science of exploring, manipulating, analyzing, and visualizing data to derive useful insights and conclusions
Machine learning: the automation of learning from data using models and algorithms
Analytics and data mining: the discovery, interpretation, and extraction of patterns and insights from large quantities of data
Cost engineering: practice devoted to the management of project cost, involving activities such as estimating, cost control, cost forecasting, investment appraisal, and risk analysis.
Value engineering: a systematic method to improve the "value" of goods or products and services by using an examination of function.
Predetermined motion time system: a technique to quantify time required for repetitive tasks.
Quality engineering: a way of preventing mistakes or defects in manufactured products and avoiding problems when delivering solutions or services to customers.
Project management: the process and activity of planning, organizing, motivating, and controlling resources, procedures, and protocols to achieve specific goals in scientific or daily problems.
Supply chain management: the management of the flow of goods. It includes the movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption.
Ergonomics: the practice of designing products, systems or processes to take proper account of the interaction between them and the people that use them.
Operations research, also known as management science: discipline that deals with the application of advanced analytical methods to help make better decisions
Operations management: an area of management concerned with overseeing, designing, and controlling the process of production and redesigning business operations in the production of goods or services.
Job design: the specification of contents, methods and relationship of jobs in order to satisfy technological and organizational requirements as well as the social and personal requirements of the job holder.
Financial engineering: the application of technical methods, especially from mathematical finance and computational finance, in the practice of finance
Industrial plant configuration: sizing of necessary infrastructure used in support and maintenance of a given facility.
Facility management: an interdisciplinary field devoted to the coordination of space, infrastructure, people and organization
Engineering design process: formulation of a plan to help an engineer build a product with a specified performance goal.
Logistics: the management of the flow of goods between the point of origin and the point of consumption in order to meet the requirements of customers or corporations.
Accounting: the measurement, processing and communication of financial information about economic entities
Capital projects: the management of activities in capital projects involves the flow of resources, or inputs, as they are transformed into outputs. Many of the tools and principles of industrial engineering can be applied to the configuration of work activities within a project; the application of industrial engineering and operations management concepts and techniques to the execution of projects has thus been referred to as project production management. Traditionally, a major aspect of industrial engineering was planning the layouts of factories and designing assembly lines and other manufacturing paradigms. Now, in lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy, and other resources.
Examples of where industrial engineering might be used include flow process charting, process mapping, designing an assembly workstation, strategizing for various operational logistics, consulting as an efficiency expert, developing a new financial algorithm or loan system for a bank, streamlining operation and emergency room location or usage in a hospital, planning complex distribution schemes for materials or products (referred to as supply-chain management), and shortening lines (or queues) at a bank, hospital, or a theme park.
Modern industrial engineers typically use predetermined motion time system, computer simulation (especially discrete event simulation), along with extensive mathematical tools for modeling, such as mathematical optimization and queueing theory, and computational methods for system analysis, evaluation, and optimization. Industrial engineers also use the tools of data science and machine learning in their work owing to the strong relatedness of these disciplines with the field and the similar technical background required of industrial engineers (including a strong foundation in probability theory, linear algebra, and statistics, as well as having coding skills).
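The queueing theory these engineers apply can be illustrated with the standard steady-state formulas for an M/M/1 queue (Poisson arrivals, a single exponential server). This is a minimal sketch with invented arrival and service rates:

```python
# Steady-state metrics for an M/M/1 queue: Poisson arrivals at rate lam,
# exponential service at rate mu. These are the standard textbook results.

def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu              # server utilization
    L = rho / (1 - rho)         # mean number in the system
    W = 1 / (mu - lam)          # mean time in the system
    Lq = rho**2 / (1 - rho)     # mean number waiting in the queue
    Wq = rho / (mu - lam)       # mean waiting time
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# e.g. 8 customers arrive per hour and a teller serves 10 per hour
m = mm1_metrics(lam=8.0, mu=10.0)
print(m)  # rho = 0.8, L = 4.0 customers, W = 0.5 hours
```

Such formulas show, for instance, why a bank queue's length grows sharply as utilization approaches 100%, which is exactly the kind of trade-off a discrete event simulation explores for systems too complex for closed-form analysis.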
=== Manufacturing (production) engineering ===
Manufacturing Engineering is based on core industrial engineering and mechanical engineering skills, adding important elements from mechatronics, commerce, economics and business management. This field also deals with the integration of different facilities and systems for producing quality products (with optimal expenditure) by applying the principles of physics and the results of manufacturing systems studies, such as the following:
Manufacturing engineers develop and create physical artifacts, production processes, and technology. It is a very broad area which includes the design and development of products. Manufacturing engineering is considered to be a sub-discipline of industrial engineering/systems engineering and has very strong overlaps with mechanical engineering. Manufacturing engineers' success or failure directly impacts the advancement of technology and the spread of innovation. This field of manufacturing engineering emerged from tool and die discipline in the early 20th century. It expanded greatly from the 1960s when industrialized countries introduced factories with:
1. Numerical control machine tools and automated systems of production.
2. Advanced statistical methods of quality control: These factories were pioneered by the American electrical engineer William Edwards Deming, who was initially ignored by his home country. The same methods of quality control later turned Japanese factories into world leaders in cost-effectiveness and production quality.
3. Industrial robots on the factory floor, introduced in the late 1970s: These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This cut costs and improved production speed.
== Education ==
=== Industrial engineering ===
==== Undergraduate curriculum ====
In the United States the undergraduate degree earned is the Bachelor of Science (B.S.) or Bachelor of Science and Engineering (B.S.E.) in Industrial Engineering (IE). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE). The typical curriculum includes a broad math and science foundation spanning chemistry, physics, mechanics (i.e., statics, kinematics, and dynamics), materials science, computer science, electronics/circuits, engineering design, and the standard range of engineering mathematics (i.e. calculus, linear algebra, differential equations, statistics). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work – which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions.
The coursework specific to IE entails specialized courses in areas such as optimization, applied probability, stochastic modeling, design of experiments, statistical process control, simulation, manufacturing engineering, ergonomics/safety engineering, and engineering economics. Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing, supply chains and logistics, analytics and machine learning, production systems, human factors and industrial design, and service systems.
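The statistical process control taught in such courses centers on control charts. This is a minimal sketch of three-sigma X-bar chart limits; the subgroup means and the standard deviation of the subgroup mean are invented for illustration:

```python
# Three-sigma control limits for an X-bar chart, estimated from subgroup
# means. In practice sigma_xbar would be estimated from subgroup ranges or
# standard deviations; here it is simply assumed.

import statistics

def xbar_limits(subgroup_means, sigma_xbar):
    """Return (LCL, center line, UCL) for an X-bar chart."""
    center = statistics.mean(subgroup_means)
    return center - 3 * sigma_xbar, center, center + 3 * sigma_xbar

means = [10.1, 9.9, 10.0, 10.2, 9.8]          # invented subgroup means
lcl, cl, ucl = xbar_limits(means, sigma_xbar=0.05)
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")  # LCL=9.85  CL=10.00  UCL=10.15
```

A subgroup mean falling outside these limits signals that the process may have drifted out of statistical control.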
Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs.
==== Graduate curriculum ====
The usual graduate degree earned is the Master of Science (MS) or Master of Science and Engineering (MSE) in Industrial Engineering, or one of various related concentration titles. Typical MS curricula build on the undergraduate core with more advanced coursework in the same areas.
=== Manufacturing (production) engineering ===
==== Degree certification programs ====
Manufacturing engineers possess an associate's or bachelor's degree in engineering with a major in manufacturing engineering. The length of study for such a degree is usually two to five years followed by five more years of professional practice to qualify as a professional engineer. Working as a manufacturing engineering technologist involves a more applications-oriented qualification path.
Academic degrees for manufacturing engineers are usually the Associate or Bachelor of Engineering, [BE] or [BEng], and the Associate or Bachelor of Science, [BS] or [BSc]. For manufacturing technologists the required degrees are Associate or Bachelor of Technology [B.TECH] or Associate or Bachelor of Applied Science [BASc] in Manufacturing, depending upon the university. Master's degrees in engineering manufacturing include Master of Engineering [ME] or [MEng] in Manufacturing, Master of Science [M.Sc] in Manufacturing Management, Master of Science [M.Sc] in Industrial and Production Management, and Master of Science [M.Sc] as well as Master of Engineering [ME] in Design, which is a subdiscipline of manufacturing. Doctoral [PhD] or [DEng] level courses in manufacturing are also available depending on the university.
The undergraduate degree curriculum generally includes courses in physics, mathematics, computer science, project management, and specific topics in mechanical and manufacturing engineering. Initially such topics cover most, if not all, of the subdisciplines of manufacturing engineering. Students then choose to specialize in one or more sub disciplines towards the end of their degree work.
Specific to industrial engineering, students will see courses covering ergonomics, scheduling, inventory management, forecasting, product development, and, in general, courses that focus on optimization. Most colleges break down the large sections of industrial engineering into healthcare, ergonomics, product development, or consulting sectors. This allows students to get a good grasp of each of the sub-sectors so they know which area they are most interested in pursuing as a career.
==== Undergraduate curriculum ====
The foundational curriculum for a bachelor's degree in manufacturing or production engineering is closely related to that of industrial engineering and mechanical engineering, but differs by placing more emphasis on manufacturing or production science. A degree in manufacturing engineering typically differs from one in mechanical engineering by only a few specialized classes; mechanical engineering focuses more on the product design process and on complex products, which require more mathematical expertise.
== Manufacturing engineering certification ==
=== Professional engineering license ===
A Professional Engineer (PE) is a licensed engineer who is permitted to offer professional services to the public. Professional Engineers may prepare, sign, seal, and submit engineering plans to the public. Before candidates can become professional engineers in the US, they must receive a bachelor's degree from an ABET-recognized university, take and pass the Fundamentals of Engineering exam to become an "engineer-in-training", and work four years under the supervision of a professional engineer. After those requirements are met, the candidate may take the PE exam; upon receiving a passing score, the candidate receives a PE license.
=== Society of Manufacturing Engineers (SME) certifications (USA) ===
The SME administers qualifications specifically for the manufacturing industry. These are not degree-level qualifications and are not recognized at the professional engineering level. The SME offers two certifications for manufacturing engineers: the Certified Manufacturing Technologist (CMfgT) and the Certified Manufacturing Engineer (CMfgE).
==== Certified manufacturing technologist ====
Qualified candidates for the Certified Manufacturing Technologist Certificate (CMfgT) must pass a three-hour, 130-question multiple-choice exam. The exam covers math, manufacturing processes, manufacturing management, automation, and related subjects. A score of 60% or higher must be achieved to pass the exam. Additionally, a candidate must have at least four years of combined education and manufacturing-related work experience. The CMfgT certification must be renewed every three years in order to stay certified.
==== Certified manufacturing engineer ====
Certified Manufacturing Engineer (CMfgE) is an engineering qualification administered by the Society of Manufacturing Engineers, Dearborn, Michigan, USA. Candidates qualifying for the credential must pass a four-hour, 180-question multiple-choice exam that covers more in-depth topics than the CMfgT exam. A score of 60% or higher must be achieved to pass. CMfgE candidates must also have eight years of combined education and manufacturing-related work experience, with a minimum of four years of work experience. The CMfgE certification must be renewed every three years to stay certified.
== Research ==
=== Industrial engineering ===
==== Human factors ====
The human factors area specializes in exploring how systems fit the people who must operate them, determining the roles people play within these systems, and selecting the people who can best fill particular roles. Students who focus on human factors work with a multidisciplinary team of faculty with strengths in understanding cognitive behavior as it relates to automation, air and ground transportation, medical studies, and space exploration.
==== Production systems ====
The production systems area develops new solutions in areas such as engineering design, supply chain management (e.g. supply chain system design, error recovery, large scale systems), manufacturing (e.g. system design, planning and scheduling), and medicine (e.g. disease diagnosis, discovery of medical knowledge). Students who focus on production systems will be able to work on topics related to computational intelligence theories for applications in industry, healthcare, and service organizations.
==== Reliability systems ====
The objective of the reliability systems area is to provide students with advanced data analysis and decision making techniques that will improve quality and reliability of complex systems. Students who focus on system reliability and uncertainty will be able to work on areas related to contemporary reliability systems including integration of quality and reliability, simultaneous life cycle design for manufacturing systems, decision theory in quality and reliability engineering, condition-based maintenance and degradation modeling, discrete event simulation and decision analysis.
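The reliability analysis described above often starts from the textbook formulas for independent components: a series system works only if every component works, while a parallel (redundant) system fails only if all components fail. A small sketch with invented component reliabilities:

```python
# System reliability for independent components in series and in parallel.
# The component reliabilities below are illustrative, not measured data.

from math import prod

def series_reliability(rs):
    # every component must work for the system to work
    return prod(rs)

def parallel_reliability(rs):
    # the system fails only if all redundant components fail
    return 1 - prod(1 - r for r in rs)

components = [0.95, 0.90, 0.99]
print(series_reliability(components))      # 0.84645: series is weaker than any part
print(parallel_reliability([0.90, 0.90]))  # 0.99: redundancy raises reliability
```

The contrast between the two results is the core argument for redundancy in complex systems: chaining components erodes reliability, duplicating them recovers it.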
==== Wind power management ====
The wind power management program aims to meet the emerging need for professionals involved in the design, operations, and management of wind farms deployed in massive numbers across the country. Graduates will be able to fully understand the system and management issues of wind farms and their interactions with alternative and conventional power generation systems.
=== Production (manufacturing) engineering ===
==== Flexible manufacturing systems ====
A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react to changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, both of which have numerous subcategories. The first category, machine flexibility, covers the system's ability to be changed to produce new product types and the ability to change the order of operations executed on a part. The second category, called routing flexibility, consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability.
Most FMS installations comprise three main systems: the work machines, often automated CNC machines; a material handling system that optimizes parts flow; and a central control computer that controls material movements and machine flow. The main advantage of an FMS is its high flexibility in managing manufacturing resources, such as time and effort, when manufacturing a new product. FMSs are best applied to the production of small sets of products at efficiencies approaching those of mass production.
==== Computer integrated manufacturing ====
Computer-integrated manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separated process methods are joined through a computer by CIM. This integration allows the processes to exchange information and to initiate actions. Through this integration, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing.
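The closed-loop control that CIM relies on can be sketched with a toy proportional controller driving a simulated process variable (for example, a furnace temperature) toward a setpoint using simulated sensor readings. The process model and gain below are invented for illustration:

```python
# A toy closed loop: each cycle, read the "sensor", compute the error
# against the setpoint, and apply a proportional correction. Real CIM
# controllers are far richer (PID, feedforward), but the loop shape is this.

def run_control_loop(setpoint, initial, kp=0.5, steps=30):
    value = initial
    for _ in range(steps):
        error = setpoint - value   # sensor feedback vs. target
        value += kp * error        # actuator correction this cycle
    return value

final = run_control_loop(setpoint=200.0, initial=20.0)
print(round(final, 2))  # converges to 200.0, the setpoint
```

Each cycle halves the remaining error (with kp = 0.5), so after 30 cycles the process variable is within a fraction of a degree of the target; this is the "real-time input from sensors" feeding back into action.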
==== Friction stir welding ====
Friction stir welding was discovered in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) welding technique joins previously un-weldable materials, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, the Orion Crew Vehicle test article, the Boeing Delta II and Delta IV expendable launch vehicles, and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the Eclipse 500 aircraft from Eclipse Aviation, among a growing range of uses.
== Employment ==
=== Industrial engineering ===
The total number of engineers employed in the US in 2015 was roughly 1.6 million. Of these, 272,470 were industrial engineers (16.92%), the third most popular engineering specialty. The median salaries by experience level are $62,000 with 0–5 years experience, $75,000 with 5–10 years experience, and $81,000 with 10–20 years experience. The average starting salaries were $55,067 with a bachelor's degree, $77,364 with a master's degree, and $100,759 with a doctorate degree. This places industrial engineering at 7th of 15 among engineering bachelor's degrees, 3rd of 10 among master's degrees, and 2nd of 7 among doctorate degrees in average annual salary. The median annual income of industrial engineers in the U.S. workforce is $83,470.
=== Production (manufacturing) engineering ===
Manufacturing engineering is just one facet of the engineering industry. Manufacturing engineers enjoy improving the production process from start to finish. They have the ability to keep the whole production process in mind as they focus on a particular portion of the process. Successful students in manufacturing engineering degree programs are inspired by the notion of starting with a natural resource, such as a block of wood, and ending with a usable, valuable product, such as a desk, produced efficiently and economically.
Manufacturing engineers are closely connected with engineering and industrial design efforts. Examples of major companies that employ manufacturing engineers in the United States include General Motors Corporation, Ford Motor Company, Chrysler, Boeing, Gates Corporation and Pfizer. Examples in Europe include Airbus, Daimler, BMW, Fiat, Navistar International, and Michelin Tyre.
=== Related industries ===
Industrial and production engineers are employed across a wide range of industries.
== Modern tools ==
Many manufacturing companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs, such as SolidWorks and AutoCAD, into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and ease of use in designing mating interfaces and tolerances.
=== SolidWorks ===
SolidWorks is an example of a CAD modeling computer program developed by Dassault Systèmes. SolidWorks is an industry standard for drafting designs and specifications for physical objects and has been used by more than 165,000 companies as of 2013.
=== AutoCAD ===
AutoCAD is an example of a CAD modeling computer program developed by Autodesk. AutoCAD is also widely used for CAD modeling and CAE.
Other CAE programs commonly used by product manufacturers include product life cycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. There is no need to create a physical prototype until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of relatively few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows.
Just as manufacturing engineering is linked with other disciplines, such as mechatronics, multidisciplinary design optimization (MDO) is also being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes by automating the process of trial and error method used by classical engineers. MDO uses a computer based algorithm that will iteratively seek better alternatives from an initial guess within given constants. MDO uses this procedure to determine the best design outcome and lists various options as well.
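The trial-and-error search that MDO tools automate can be sketched as a simple iterative improvement loop: starting from an initial guess, try small perturbations of a design variable and keep any feasible change that improves the objective. The objective and constraint here are invented stand-ins for real CAE analyses:

```python
# A toy single-variable design search in the spirit of MDO: perturb the
# design, keep improvements that satisfy the constraint, shrink the step.
# Real MDO couples many variables and expensive simulations; the shape of
# the loop is the same.

def objective(x):        # stand-in for, e.g., structural mass vs. thickness
    return (x - 3.0) ** 2 + 1.0

def feasible(x):         # stand-in for a stress or geometry constraint
    return 1.0 <= x <= 10.0

def iterate_design(x0, step=0.5, iterations=100):
    x = x0
    for _ in range(iterations):
        for candidate in (x - step, x + step):
            if feasible(candidate) and objective(candidate) < objective(x):
                x = candidate          # keep the improving, feasible move
        step *= 0.9                    # shrink the trial step as it settles
    return x

best = iterate_design(8.0)
print(round(best, 2))  # settles near the optimum at x = 3.0
```

The shrinking step mirrors how MDO wrappers narrow the search around promising designs instead of exhaustively evaluating every alternative.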
== Sub-disciplines ==
=== Mechanics ===
Classical mechanics attempts to use Newton's basic laws of motion to describe how a body will react when it undergoes a force. Modern mechanics, however, also includes the comparatively recent quantum theory. Subdisciplines of mechanics include:
Classical Mechanics:
Statics, the study of non-moving bodies in equilibrium.
Kinematics, is the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion.
Dynamics (or kinetics), the study of how forces affect moving bodies.
Mechanics of materials, the study of how different materials deform under various types of stress.
Fluid mechanics, the study of how the principles of classical mechanics are observed with liquids and gases.
Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete)
Quantum:
Quantum mechanics, the study of atoms, molecules, electrons, protons, and neutrons on a subatomic scale. This type of mechanics attempts to explain their motion and physical properties within an atom.
If the engineering project were to design a vehicle, statics might be employed to design the frame of the vehicle in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the manufacture of the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle or to design the intake system for the engine.
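The mechanics-of-materials step above can be sketched numerically. The following is a minimal illustration of checking a frame member against material yield; the load, cross-section, and yield strength are invented example values, not data from any real vehicle design.

```python
# Toy mechanics-of-materials check: axial stress in a frame member,
# compared against the material's yield strength. All numbers below
# are illustrative, not from a real design.

def axial_stress(force_n: float, area_m2: float) -> float:
    """Return normal stress (Pa) for an axial load on a cross-section."""
    return force_n / area_m2

force = 50_000.0           # applied axial load, N (hypothetical)
area = 0.0005              # cross-sectional area, m^2 (5 cm^2)
yield_strength = 250e6     # typical mild steel yield strength, Pa

stress = axial_stress(force, area)
safety_factor = yield_strength / stress
print(f"stress = {stress / 1e6:.0f} MPa, safety factor = {safety_factor:.1f}")
```

A safety factor comfortably above 1 suggests the chosen material and cross-section can carry the assumed load; a real design would also consider fatigue, buckling, and stress concentrations.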
=== Drafting ===
Drafting or technical drawing is the means by which manufacturers create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Programs such as SolidWorks and AutoCAD are examples of programs used to draft new parts and products under development.
Optionally, an engineer may also manually manufacture a part using the technical drawings, but this is becoming an increasing rarity with the advent of computer numerically controlled (CNC) manufacturing. Engineers primarily manufacture parts manually in the areas of applied spray coatings, finishes, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every sub discipline of mechanical and manufacturing engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).
=== Metal fabrication and machine tools ===
Metal fabrication is the building of metal structures by cutting, bending, and assembling processes. Technologies such as electron beam melting, laser engineered net shape, and direct metal laser sintering have made producing metal structures much less difficult compared to conventional metal fabrication methods. These technologies help to alleviate issues that arise when the idealized CAD structures do not align with the actual fabricated structure.
Machine tools employ many types of tools that do the cutting or shaping of materials. Machine tools usually include many components consisting of motors, levers, arms, pulleys, and other basic simple systems to create a complex system that can build various things. All of these components must work correctly in order to stay on schedule and remain on task. Machine tools aim to efficiently and effectively produce good parts at a quick pace with a small amount of error.
=== Computer integrated manufacturing ===
Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries. Computer-integrated manufacturing allows for data, through various sensing mechanisms to be observed during manufacturing. This type of manufacturing has computers controlling and observing every part of the process. This gives CIM a unique advantage over other manufacturing processes.
=== Mechatronics ===
Mechatronics is an engineering discipline that deals with the convergence of electrical, mechanical and manufacturing systems. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various aircraft and automobile subsystems. A mechatronic system typically includes a mechanical skeleton, motors, controllers, sensors, actuators, and digital hardware. Mechatronics is greatly used in various applications of industrial processes and in automation.
The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as Microelectromechanical systems (MEMS), are used in automobiles to initiate the deployment of airbags, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In future it is hoped that such devices will be used in tiny implantable medical devices and to improve optical communication.
=== Textile engineering ===
Textile engineering courses deal with the application of scientific and engineering principles to the design and control of all aspects of fiber, textile, and apparel processes, products, and machinery. These include natural and man-made materials, interaction of materials with machines, safety and health, energy conservation, and waste and pollution control. Additionally, students are given experience in plant design and layout, machine and wet process design and improvement, and designing and creating textile products. Throughout the textile engineering curriculum, students take classes in other engineering disciplines, including mechanical, chemical, materials, and industrial engineering.
=== Advanced composite materials ===
Advanced composite materials (ACMs) are also known as advanced polymer matrix composites. They are generally characterized by unusually high-strength fibres with unusually high stiffness (modulus of elasticity) compared to other materials, bound together by weaker matrices. Advanced composite materials have broad, proven applications in the aircraft, aerospace, and sports equipment sectors. More specifically, ACMs are very attractive for aircraft and aerospace structural parts. Manufacturing ACMs is a multibillion-dollar industry worldwide. Composite products range from skateboards to components of the space shuttle. The industry can be generally divided into two basic segments: industrial composites and advanced composites.
== See also ==
Associations
American Society for Engineering Education
American Society for Quality
European Students of Industrial Engineering and Management (ESTIEM)
Indian Institution of Industrial Engineering
Institute for Operations Research and the Management Sciences (INFORMS)
Institute of Industrial Engineers
Institution of Electrical Engineers
Society of Manufacturing Engineers
== References == | Wikipedia/Industrial_and_production_engineering |
Production planning is the planning of production and manufacturing modules in a company or industry. It allocates resources, including the activities of employees, materials, and production capacity, in order to serve different customers.
Different types of production methods, such as single item manufacturing, batch production, mass production, continuous production etc. have their own type of production planning. Production planning can be combined with production control into production planning and control, or it can be combined with enterprise resource planning.
== Overview ==
Production planning concerns the future of production. It can help in efficient manufacturing or in setting up a production site by ensuring that required resources are available. A production plan is made periodically for a specific time period, called the planning horizon. It can comprise the following activities:
Determination of the required product mix and factory load to satisfy customers' needs.
Matching the required level of production to the existing resources.
Scheduling and choosing the actual work to be started in the manufacturing facility.
Setting up and delivering production orders to production facilities.
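The second activity above, matching required production to existing resources, can be sketched as a simple capacity check: compare the machine-hours demanded by a product mix against the hours available over the planning horizon. All figures below are hypothetical.

```python
# Minimal sketch of matching a required production level to existing
# resources: total machine-hours demanded by a product mix versus
# available capacity. Product names, rates, and capacity are invented.

def required_hours(mix: dict[str, int], hours_per_unit: dict[str, float]) -> float:
    """Total machine-hours needed to produce the given product mix."""
    return sum(qty * hours_per_unit[p] for p, qty in mix.items())

product_mix = {"A": 400, "B": 150}       # units to produce in the horizon
hours_per_unit = {"A": 0.5, "B": 2.0}    # machine-hours per unit
available_hours = 480.0                  # capacity over the horizon

load = required_hours(product_mix, hours_per_unit)
print(f"load = {load} h, utilisation = {load / available_hours:.0%}")
```

A utilisation above 100% signals that the plan is infeasible as stated, and the planner must add capacity, adjust the mix, or reschedule work into a later period.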
In order to develop production plans, the production planner or production planning department needs to work closely with the marketing and sales departments, which can provide sales forecasts or a listing of customer orders. The "work is usually selected from a variety of product types which may require different resources and serve different customers. Therefore, the selection must optimize customer-independent performance measures such as cycle time and customer-dependent performance measures such as on-time delivery."
A critical factor in production planning is "the accurate estimation of the productive capacity of available resources, yet this is one of the most difficult tasks to perform well". Production planning should always take "into account material availability, resource availability and knowledge of future demand".
== History ==
Modern production planning methods and tools have been developed since the late 19th century. Under scientific management, the work for each man or each machine is mapped out in advance. The origin of production planning goes back another century. Kaplan (1986) summarized that "the demand for information for internal planning and control apparently arose in the first half of the 19th century when firms, such as textile mills and railroads, had to devise internal administrative procedures to coordinate the multiple processes involved in the performance of the basic activity (the conversion of raw materials into finished goods by textile mills, the transportation of passengers and freight by the railroads)."
Herrmann (1996) further describes the circumstances in which new methods for internal planning and control evolved: "The first factories were quite simple and relatively small. They produced a small number of products in large batches. Productivity gains came from using interchangeable parts to eliminate time-consuming fitting operations. Through the late 1800s, manufacturing firms were concerned with maximizing the productivity of the expensive equipment in the factory. Keeping utilization high was an important objective. Foremen ruled their shops, coordinating all of the activities needed for the limited number of products for which they were responsible. They hired operators, purchased materials, managed production, and delivered the product. They were experts with superior technical skills, and they (not a separate staff of clerks) planned production. Even as factories grew, they were just bigger, not more complex."
About production planning Herrmann (1996) recounts that "production scheduling started simply also. Schedules, when used at all, listed only when work on an order should begin or when the order is due. They didn't provide any information about how long the total order should take or about the time required for individual operations ..."
In 1923 Industrial Management cited a Mr. Owens who had observed: "Production planning is rapidly becoming one of the most vital necessities of management. It is true that every establishment, no matter how large or how small has production planning in some form; but a large percentage of these do not have planning that makes for an even flow of material, and a minimum amount of money tied up in inventories."
== Topics ==
=== Types of planning ===
Different types of production planning can be applied:
Advanced planning and scheduling
Capacity planning
Master production schedule
Material requirements planning
MRP II (Manufacturing Resources Planning)
Scheduling
Workflow
Related kind of planning in organizations
Employee scheduling
Enterprise resource planning
Inventory control
Product planning
Project planning
Process planning, redirects to Computer-aided process planning
Sales and operations planning
Strategy
=== Production control ===
Production control is the activity of controlling the workflow in production. It is partly complementary to production planning.
== See also ==
Industrial engineering
Manufacturing process management
Materials management
Operations management
Production engineering
== References == | Wikipedia/Production_planning_and_control |
Energy modeling or energy system modeling is the process of building computer models of energy systems in order to analyze them. Such models often employ scenario analysis to investigate different assumptions about the technical and economic conditions at play. Outputs may include the system feasibility, greenhouse gas emissions, cumulative financial costs, natural resource use, and energy efficiency of the system under investigation. A wide range of techniques are employed, ranging from broadly economic to broadly engineering. Mathematical optimization is often used to determine the least-cost in some sense. Models can be international, regional, national, municipal, or stand-alone in scope. Governments maintain national energy models for energy policy development.
Energy models are usually intended to contribute variously to system operations, engineering design, or energy policy development. This page concentrates on policy models. Individual building energy simulations are explicitly excluded, although they too are sometimes called energy models. IPCC-style integrated assessment models, which also contain a representation of the world energy system and are used to examine global transformation pathways through to 2050 or 2100 are not considered here in detail.
Energy modeling has increased in importance as the need for climate change mitigation has grown in importance. The energy supply sector is the largest contributor to global greenhouse gas emissions. The IPCC reports that climate change mitigation will require a fundamental transformation of the energy supply system, including the substitution of unabated (not captured by CCS) fossil fuel conversion technologies by low-GHG alternatives.
== Model types ==
A wide variety of model types are in use. This section attempts to categorize the key types and their usage. The divisions provided are not hard and fast and mixed-paradigm models exist. In addition, the results from more general models can be used to inform the specification of more detailed models, and vice versa, thereby creating a hierarchy of models. Models may, in general, need to capture "complex dynamics such as:
energy system operation
technology stock turnover
technology innovation
firm and household behaviour
energy and non-energy capital investment and labour market adjustment dynamics leading to economic restructuring
infrastructure deployment and urban planning".
Models may be limited in scope to the electricity sector or they may attempt to cover an energy system in its entirety (see below).
Most energy models are used for scenario analysis. A scenario is a coherent set of assumptions about a possible system. New scenarios are tested against a baseline scenario – normally business-as-usual (BAU) – and the differences in outcome noted.
The time horizon of the model is an important consideration. Single-year models – set in either the present or the future (say 2050) – assume a non-evolving capital structure and focus instead on the operational dynamics of the system. Single-year models normally embed considerable temporal (typically hourly resolution) and technical detail (such as individual generation plant and transmissions lines). Long-range models – cast over one or more decades (from the present until say 2050) – attempt to encapsulate the structural evolution of the system and are used to investigate capacity expansion and energy system transition issues.
Models often use mathematical optimization to solve for redundancy in the specification of the system. Some of the techniques used derive from operations research. Most rely on linear programming (including mixed-integer programming), although some use nonlinear programming. Solvers may use classical or genetic optimization, such as CMA-ES. Models may be recursive-dynamic, solving sequentially for each time interval, and thus evolving through time. Or they may be framed as a single forward-looking intertemporal problem, and thereby assume perfect foresight. Single-year engineering-based models usually attempt to minimize the short-run financial cost, while single-year market-based models use optimization to determine market clearing. Long-range models, usually spanning decades, attempt to minimize both the short and long-run costs as a single intertemporal problem.
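The cost minimization described above can be illustrated in its simplest form: a merit-order dispatch that fills demand from the cheapest plants first. This is a toy sketch with invented plants and costs; real energy models solve the equivalent problem as a (mixed-integer) linear program with many additional constraints.

```python
# Illustrative least-cost ("merit order") dispatch: the simplest case of
# the cost minimization performed by energy system models. Plant names,
# capacities, and marginal costs are hypothetical. With no network or
# ramping constraints, the greedy merit order matches the LP optimum.

def dispatch(plants, demand_mw):
    """Greedy merit-order dispatch, cheapest plants first.

    plants: list of (name, capacity_mw, marginal_cost) tuples.
    Returns {name: output_mw}; demand is assumed not to exceed capacity.
    """
    schedule = {}
    remaining = demand_mw
    for name, capacity, _cost in sorted(plants, key=lambda p: p[2]):
        output = min(capacity, remaining)
        schedule[name] = output
        remaining -= output
    return schedule

plants = [("coal", 500, 30.0), ("gas", 400, 60.0), ("wind", 200, 0.0)]
print(dispatch(plants, 800))   # wind is dispatched first, then coal, then gas
```

Adding transmission limits, unit-commitment decisions, or intertemporal storage turns this greedy procedure into the linear and mixed-integer programs mentioned above.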
The demand-side (or end-user domain) has historically received relatively scant attention, often modeled by just a simple demand curve. End-user energy demand curves, in the short-run at least, are normally found to be highly inelastic.
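"Highly inelastic" can be made concrete with the standard arc (midpoint) elasticity formula. The price/quantity points below are hypothetical, chosen only to show what an inelastic response looks like numerically.

```python
# Arc price elasticity of demand (midpoint formula). A magnitude well
# below 1 indicates inelastic demand. The two observations below are
# invented for illustration: a 50% price rise cuts consumption only 4%.

def arc_elasticity(p1: float, q1: float, p2: float, q2: float) -> float:
    """Elasticity between two (price, quantity) points, midpoint method."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

e = arc_elasticity(100.0, 1000.0, 150.0, 960.0)
print(round(e, 2))   # small magnitude -> inelastic demand
```

Short-run end-user energy demand estimates typically land in this sub-unity range, which is why a simple, steep demand curve has often been considered an adequate representation.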
As intermittent energy sources and energy demand management grow in importance, models have needed to adopt an hourly temporal resolution in order to better capture their real-time dynamics. Long-range models are often limited to calculations at yearly intervals, based on typical day profiles, and are hence less suited to systems with significant variable renewable energy. Day-ahead dispatching optimization is used to aid in the planning of systems with a significant portion of intermittent energy production in which uncertainty around future energy predictions is accounted for using stochastic optimization.
Implementing languages include GAMS, MathProg, MATLAB, Mathematica, Python, Pyomo, R, Fortran, Java, C, C++, and Vensim. Occasionally spreadsheets are used.
As noted, IPCC-style integrated models (also known as integrated assessment models or IAM) are not considered here in any detail. Integrated models combine simplified sub-models of the world economy, agriculture and land-use, and the global climate system in addition to the world energy system. Examples include GCAM, MESSAGE, and REMIND.
Published surveys on energy system modeling have focused on techniques, general classification, an overview, decentralized planning, modeling methods, renewables integration, energy efficiency policies, electric vehicle integration, international development, and the use of layered models to support climate protection policy. Deep Decarbonization Pathways Project researchers have also analyzed model typologies. A 2014 paper outlines the modeling challenges ahead as energy systems become more complex and human and social factors become increasingly relevant.
=== Electricity sector models ===
Electricity sector models are used to model electricity systems. The scope may be national or regional, depending on circumstances. For instance, given the presence of national interconnectors, the western European electricity system may be modeled in its entirety.
Engineering-based models usually contain a good characterization of the technologies involved, including the high-voltage AC transmission grid where appropriate. Some models (for instance, models for Germany) may assume a single common bus or "copper plate" where the grid is strong. The demand-side in electricity sector models is typically represented by a fixed load profile.
Market-based models, in addition, represent the prevailing electricity market, which may include nodal pricing.
Game theory and agent-based models are used to capture and study strategic behavior within electricity markets.
=== Energy system models ===
In addition to the electricity sector, energy system models include the heat, gas, mobility, and other sectors as appropriate. Energy system models are often national in scope, but may be municipal or international.
So-called top-down models are broadly economic in nature and based on either partial equilibrium or general equilibrium. General equilibrium models represent a specialized activity and require dedicated algorithms. Partial equilibrium models are more common.
So-called bottom-up models capture the engineering well and often rely on techniques from operations research. Individual plants are characterized by their efficiency curves (also known as input/output relations), nameplate capacities, investment costs (capex), and operating costs (opex). Some models allow for these parameters to depend on external conditions, such as ambient temperature.
Producing hybrid top-down/bottom-up models to capture both the economics and the engineering has proved challenging.
== Established models ==
This section lists some of the major models in use. These are typically run by national governments.
In a community effort, a large number of existing energy system models were collected in model fact sheets on the Open Energy Platform.
=== LEAP ===
LEAP, the Low Emissions Analysis Platform (formerly known as the Long-range Energy Alternatives Planning System) is a software tool for energy policy analysis, air pollution abatement planning and climate change mitigation assessment.
LEAP was developed at the Stockholm Environment Institute's (SEI) US Center. LEAP can be used to examine city, statewide, national, and regional energy systems. LEAP is normally used for studies of between 20 and 50 years. Most of its calculations occur at yearly intervals. LEAP allows policy analysts to create and evaluate alternative scenarios and to compare their energy requirements, social costs and benefits, and environmental impacts. As of June 2021, LEAP had over 6,000 users in 200 countries and territories.
=== Power system simulation ===
General Electric's MAPS (Multi-Area Production Simulation) is a production simulation model used by various Regional Transmission Organizations and Independent System Operators in the United States to plan for the economic impact of proposed electric transmission and generation facilities in FERC-regulated electric wholesale markets. Portions of the model may also be used for the commitment and dispatch phase (updated on 5 minute intervals) in operation of wholesale electric markets for RTO and ISO regions. ABB's PROMOD is a similar software package. These ISO and RTO regions also utilize a GE software package called MARS (Multi-Area Reliability Simulation) to ensure the power system meets reliability criteria (a loss of load expectation (LOLE) of no greater than 0.1 days per year). Further, a GE software package called PSLF (Positive Sequence Load Flow) and a Siemens software package called PSSE (Power System Simulation for Engineering) analyze load flow on the power system for short-circuits and stability during preliminary planning studies by RTOs and ISOs.
=== MARKAL/TIMES ===
MARKAL (MARKet ALlocation) is an integrated energy systems modeling platform, used to analyze energy, economic, and environmental issues at the global, national, and municipal level over time-frames of up to several decades. MARKAL can be used to quantify the impacts of policy options on technology development and natural resource depletion. The software was developed by the Energy Technology Systems Analysis Programme (ETSAP) of the International Energy Agency (IEA) over a period of almost two decades.
TIMES (The Integrated MARKAL-EFOM System) is an evolution of MARKAL – both energy models have many similarities. TIMES succeeded MARKAL in 2008. Both models are technology explicit, dynamic partial equilibrium models of energy markets. In both cases, the equilibrium is determined by maximizing the total consumer and producer surplus via linear programming. Both MARKAL and TIMES are written in GAMS.
The TIMES model generator was also developed under the Energy Technology Systems Analysis Program (ETSAP). TIMES combines two different, but complementary, systematic approaches to modeling energy – a technical engineering approach and an economic approach. TIMES is a technology rich, bottom-up model generator, which uses linear programming to produce a least-cost energy system, optimized according to a number of user-specified constraints, over the medium to long-term. It is used for "the exploration of possible energy futures based on contrasted scenarios".
As of 2015, the MARKAL and TIMES model generators are in use in 177 institutions spread over 70 countries.
=== NEMS ===
NEMS (National Energy Modeling System) is a long-standing United States government policy model, run by the Department of Energy (DOE). NEMS computes equilibrium fuel prices and quantities for the US energy sector. To do so, the software iteratively solves a sequence of linear programs and nonlinear equations. NEMS has been used to explicitly model the demand-side, in particular to determine consumer technology choices in the residential and commercial building sectors.
NEMS is used to produce the Annual Energy Outlook each year – for instance in 2015.
== Criticisms ==
Public policy energy models have been criticized for being insufficiently transparent. The source code and data sets should at least be available for peer review, if not explicitly published. To improve transparency and public acceptance, some models are undertaken as open-source software projects, often developing a diverse community as they proceed. OSeMOSYS is an example of such a model. The Open Energy Outlook is an open community that has produced a long-term outlook of the U.S. energy system using the open-source TEMOA model.
Not a criticism per se, but it is necessary to understand that model results do not constitute future predictions.
== See also ==
General
Climate change mitigation – actions to limit long-term climate change
Climate change mitigation scenarios – possible futures in which global warming is reduced by deliberate actions
Economic model
Energy system – the interpretation of the energy sector in system terms
Energy Modeling Forum – a Stanford University-based modeling forum
Open Energy Modelling Initiative – an open source energy modeling initiative, centered on Europe
Open energy system databases – database projects which collect, clean, and republish energy-related datasets
Open energy system models – a review of energy system models that are also open source
Power system simulation
Models
iNEMS (Integrated National Energy Modeling System) – a national energy model for China
MARKAL – an energy model
NEMS – the US government national energy model
POLES (Prospective Outlook on Long-term Energy Systems) – an energy sector world simulation model
KAPSARC Energy Model – an energy sector model for Saudi Arabia
== Further reading ==
Introductory video on open energy system modeling with python language example
Introductory video with reference to public policy
== References ==
== External links ==
COST TD1207 Mathematical Optimization in the Decision Support Systems for Efficient and Robust Energy Networks wiki – a typology for optimization models
EnergyPLAN — a freeware energy model from the Department of Development and Planning, Aalborg University, Denmark
Open Energy Modelling Initiative open models page – a list of open energy models
model.energy — an online "toy" model utilizing the PyPSA framework that allows the public to experiment
Building Energy Modeling Tools by National Renewable Energy Laboratory | Wikipedia/Energy_modelling |
A drop in oil production in the wake of the Iranian revolution led to an energy crisis in 1979. Although the global oil supply only decreased by approximately four percent, the oil markets' reaction raised the price of crude oil drastically over the next 12 months, more than doubling it to $39.50 per barrel ($248/m3). The sudden increase in price was connected with fuel shortages similar to the 1973 oil crisis.
In 1980, following the onset of the Iran–Iraq War, oil production in Iran fell drastically. Iraq's oil production also dropped significantly, triggering economic recessions worldwide. Oil prices did not return to pre-crisis levels until the mid-1980s.
After 1980, oil prices began a 20-year decline, interrupted by a brief uptick during the Gulf War, ultimately falling 60% in the 1990s. Major oil exporters Mexico, Nigeria, and Venezuela expanded their production during this time. The Soviet Union became the largest oil producer in the world, and oil from the North Sea and Alaska flooded the market.
== Iran ==
In November 1978, a strike by 37,000 workers at Iran's nationalized oil refineries reduced production from 6 million barrels (950,000 m3) per day to about 1.5 million barrels (240,000 m3). Foreign workers left the country. However, by bringing navy personnel into crude oil production operations, the government fixed short-term disruptions, and by the end of November output had returned to an almost normal level.
On January 16, 1979, the Shah of Iran, Mohammad Reza Pahlavi, and his wife, Farah Pahlavi, left Iran at the behest of Prime Minister Shapour Bakhtiar, who sought to calm the situation. After the departure of the Shah, Ayatollah Khomeini became the new leader of Iran.
== Effects ==
=== Other OPEC members ===
The rise in oil prices benefited a few members of the Organization of the Petroleum Exporting Countries (OPEC), which made record profits. Under the new Iranian government, oil exports later resumed but production was inconsistent and at a lower volume, further raising prices. Saudi Arabia and other OPEC nations, under the presidency of Mana Al Otaiba, increased production to offset most of the decline, and by early 1979 the overall loss in worldwide production was roughly four percent.
The war between Iran and Iraq in 1980 caused a further 7 percent drop in worldwide production and OPEC production was surpassed by other exporters such as the United States as its member nations were divided amongst themselves. Saudi Arabia, a "swing producer", tried to gain back the market share after 1985, increasing production and causing downward pressure on prices, making high-cost oil production facilities less profitable.
=== United States ===
The oil crisis had a mixed impact on the United States. Richard Nixon had imposed price controls on domestic oil as a result of the 1973 oil crisis. Since then, gasoline price controls had been repealed, but those on domestic oil remained.
The Jimmy Carter administration began a phased deregulation of oil prices on April 5, 1979, when the average price of crude oil was US$15.85 per barrel ($100/m3). Starting with the Iranian revolution, the price of crude oil rose to $39.50 per barrel ($248/m3) over the next 12 months (its all-time highest real price until March 3, 2008). Deregulating domestic oil price controls allowed U.S. oil output to rise sharply from the large Prudhoe Bay fields, while oil imports fell sharply.
Although not directly related, the near-disaster at Three Mile Island on March 28, 1979, also increased anxiety about energy policy and availability. Due to memories of the oil shortage in 1973, motorists soon began panic buying, and long lines appeared at gas stations, as they had six years earlier. The average vehicle of the time consumed between two and three liters (about 0.5–0.8 gallons) of gasoline an hour while idling, and it was estimated that Americans wasted up to 150,000 barrels (24,000 m3) of oil per day idling their engines in the lines at gas stations.
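The idling estimate above is easy to sanity-check arithmetically. The sketch below converts 150,000 barrels per day into implied car-hours of idling, using the mid-range idle rate from the text; it simplifies by treating the crude-oil volume as if it were gasoline, ignoring refining losses.

```python
# Rough arithmetic check of the 1979 idling estimate: how many
# car-hours of idling per day does 150,000 barrels correspond to?
# Simplification: crude volume is treated directly as gasoline.

LITERS_PER_BARREL = 158.987   # one US oil barrel in liters

barrels_per_day = 150_000
liters_per_day = barrels_per_day * LITERS_PER_BARREL

idle_rate_l_per_h = 2.5       # mid-range of the 2-3 L/h figure in the text

car_hours_per_day = liters_per_day / idle_rate_l_per_h
print(f"{car_hours_per_day / 1e6:.1f} million car-hours of idling per day")
```

The result, roughly 9.5 million car-hours per day, is plausible for a country with tens of millions of motorists queuing at gas stations.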
The amount of oil sold in the United States in 1979 was only 3.5 percent less than the record set for oil sold the previous year. A telephone poll of 1,600 American adults conducted by the Associated Press and NBC News and released in early May 1979 found that only 37 percent of Americans thought the energy shortages were real, nine percent were not sure, and 54 percent thought the energy shortages were a hoax.
Many politicians proposed gas rationing. One such proponent was Harry Hughes, Governor of Maryland, who proposed odd-even rationing (only people with an odd-numbered license plate could purchase gas on an odd-numbered day), as was used during the 1973 Oil Crisis. Several states implemented odd-even gas rationing, including California, Pennsylvania, New York, New Jersey, Oregon, and Texas. Coupons for gasoline rationing were printed but were never actually used during the 1979 crisis.
On July 15, 1979, President Carter outlined his plans to reduce oil imports and improve energy efficiency in his "Crisis of Confidence" speech (sometimes known as the "malaise" speech). In the speech, Carter encouraged citizens to do what they could to reduce their use of energy. He had already installed water tank heating solar panels on the roof of the White House and a wood-burning stove in the living quarters. However, the panels were removed in 1986, reportedly for roof maintenance, during the administration of his successor, Ronald Reagan.
A speech Carter gave in April 1977 argued the oil crisis was "the moral equivalent of war". In November 1979, Iranian revolutionaries seized the American Embassy, and Carter imposed an embargo on Iranian oil. In January 1980, he issued the Carter Doctrine, declaring: "An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States". Additionally, as part of his administration's efforts at deregulation, Carter proposed removing price controls that had been imposed by the Richard Nixon administration before the 1973 crisis. Carter agreed to remove price controls in phases. They were finally fully dismantled in 1981 under Reagan. Carter also said he would impose a windfall profit tax on oil companies. While the regulated price of domestic oil was kept to $6 a barrel, the world market price was $30.
In 1980, the U.S. government established the Synthetic Fuels Corporation to produce an alternative to imported fossil fuels.
When the price of West Texas Intermediate crude oil increased 250 percent between 1978 and 1980, the oil-producing areas of Texas, Oklahoma, Louisiana, Colorado, Wyoming, and Alaska began experiencing an economic boom and population inflows.
According to one study, individuals who were between the ages of 15 and 18 during the 1979 oil crisis were substantially less likely to use cars once they were in their mid-30s.
=== Other oil-consuming nations ===
In response to the high oil prices of the 1970s, industrial nations took steps to reduce their dependence on oil from the Organization of Petroleum Exporting Countries (OPEC). Electric utilities worldwide switched from oil to coal, natural gas, or nuclear power; national governments initiated multibillion-dollar research programs to develop alternatives to oil; and commercial exploration developed major non-OPEC oilfields in Siberia, Alaska, the North Sea, and the Gulf of Mexico. By 1986, daily worldwide demand for oil had dropped by 5 million barrels, but non-OPEC production rose by an even larger amount. Consequently, OPEC's market share fell from 50 percent in 1979 to 29 percent in 1985.
=== Automobile fuel economy ===
At the time, Detroit's "Big Three" automakers (Ford, Chrysler, GM) were marketing downsized full-sized automobiles like the Chevrolet Caprice, the Ford LTD Crown Victoria, and the Dodge St. Regis, which met the CAFE fuel economy mandates passed in 1978. Detroit's response to the growing popularity of imported compacts like the Toyota Corolla and the Volkswagen Rabbit was the Chevrolet Citation and the Ford Fairmont. Ford replaced the Pinto with the Escort, and Chrysler, on the verge of bankruptcy, introduced the Dodge Aries K. After an unfavorable market reaction to the Citation, GM introduced the Chevrolet Corsica and Chevrolet Beretta in 1987, which sold better. GM also replaced the Chevrolet Monza with the 1982 Chevrolet Cavalier, which was better received. Ford experienced a similar market rejection of the Fairmont and introduced the front-wheel-drive Ford Tempo in 1984.
Detroit was not well prepared for the sudden rise in fuel prices, and imported brands, primarily the mass-marketed Asian models with lower manufacturing costs than British and West German brands, gained ground. Moreover, the rising value of the Deutsche mark and the British pound aided the rise of Japanese manufacturers, who were able to export their products from Japan at a lower cost and profitably (despite accusations of price dumping); Japanese cars were now more widely available in North America and were developing a loyal customer base.
A year after the 1979 Iranian Revolution, Japanese manufacturers surpassed Detroit's production totals, becoming first in the world. Indeed, the share of Japanese cars in U.S. auto purchases rose from 9 percent in 1976 to 21 percent in 1980. Japanese exports would later displace the automotive market once dominated by lower-tier European manufacturers (Renault, Fiat, Opel, Peugeot, MG, Triumph, Citroen). Some would declare bankruptcy (e.g. Triumph, Simca) or withdraw from the U.S. market, especially in the wake of grey market automobiles or the inability of the vehicle to meet DOT requirements (from emission requirements to automotive lighting). Many imported brands utilized fuel-saving technologies such as fuel injection and multi-valve engines over the common use of carburetors. The overall fuel economy of cars in the United States increased from about 15 miles per US gallon (16 L/100 km; 18 mpg‑imp) in 1979 to 18 mpg‑US (13 L/100 km; 22 mpg‑imp) by 1985 and 20 mpg‑US (12 L/100 km; 24 mpg‑imp) by 1990. This was one factor leading to the subsequent 1980s oil glut.
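The fuel-economy figures quoted above pair mpg-US values with metric equivalents; they can be cross-checked with the standard unit conversion (1 US gallon = 3.785411784 L, 1 mile = 1.609344 km). A quick sketch:

```python
# Convert US miles-per-gallon to litres per 100 km.
LITRES_PER_US_GALLON = 3.785411784
KM_PER_MILE = 1.609344

def mpg_us_to_l_per_100km(mpg):
    """Fuel used (L) to travel 100 km at the given US mpg rating."""
    km_per_litre = mpg * KM_PER_MILE / LITRES_PER_US_GALLON
    return 100.0 / km_per_litre

# The fleet-average figures for 1979, 1985, and 1990:
for mpg in (15, 18, 20):
    print(f"{mpg} mpg-US ~ {mpg_us_to_l_per_100km(mpg):.0f} L/100 km")
```

Rounded to whole litres, this reproduces the 16, 13, and 12 L/100 km equivalents given in the text.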
== See also ==
Energy crisis
Energy diplomacy
Iran hostage crisis
1979 world oil market chronology
1980s oil glut
2000s energy crisis
Hubbert peak theory
Carless days in New Zealand
== References ==
== Further reading ==
Ammann, Daniel (2009). The King of Oil: The Secret Lives of Marc Rich. New York: St. Martin's Press. ISBN 978-0-312-57074-3.
Lesch, David W. 1979: The Year That Shaped The Modern Middle East (2001) excerpt
Odell, Peter R. Oil and gas: crises and controversies 1961-2000 (2001) online
Odell, Peter R. Oil and world power : background to the oil crisis (1974) online
Painter, David S. (2014) "Oil and geopolitics: The oil crises of the 1970s and the cold war." Historical Social Research/Historische Sozialforschung (2014): 186–208. online
Randall, Stephen J. United States foreign oil policy since World War I: For profits and security (Montreal: McGill-Queen's Press-MQUP, 2005).
Yergin, Daniel (1991). The Prize: The Epic Quest for Oil, Money, and Power. Simon & Schuster. ISBN 0-671-50248-4.
Energy harvesting (EH) – also known as power harvesting, energy scavenging, or ambient power – is the process by which energy is derived from external sources (e.g., solar power, thermal energy, wind energy, salinity gradients, and kinetic energy, also known as ambient energy), then stored for use by small, wireless autonomous devices, like those used in wearable electronics, condition monitoring, and wireless sensor networks.
Energy harvesters usually provide a very small amount of power for low-energy electronics. While the input fuel to some large-scale energy generation costs resources (oil, coal, etc.), the energy source for energy harvesters is present as ambient background. For example, temperature gradients exist from the operation of a combustion engine and in urban areas, there is a large amount of electromagnetic energy in the environment due to radio and television broadcasting.
One of the first examples of ambient energy being used to produce electricity was the successful use of electromagnetic radiation (EMR) to power the crystal radio.
The principles of energy harvesting from ambient EMR can be demonstrated with basic components.
== Operation ==
Energy harvesting devices converting ambient energy into electrical energy have attracted much interest in both the military and commercial sectors. Some systems convert motion, such as that of ocean waves, into electricity to be used by oceanographic monitoring sensors for autonomous operation. Future applications may include high-power output devices (or arrays of such devices) deployed at remote locations to serve as reliable power stations for large systems. Another application is in wearable electronics, where energy-harvesting devices can power or recharge cell phones, mobile computers, and radio communication equipment. All of these devices must be sufficiently robust to endure long-term exposure to hostile environments and have a broad range of dynamic sensitivity to exploit the entire spectrum of wave motions. In addition, one of the latest techniques to generate electric power from vibration waves is the utilization of Auxetic Boosters. This method falls under the category of piezoelectric-based vibration energy harvesting (PVEH), where the harvested electric energy can be directly used to power wireless sensors, monitoring cameras, and other Internet of Things (IoT) devices.
=== Accumulating energy ===
Energy can also be harvested to power small autonomous sensors such as those developed using MEMS technology. These systems are often very small and require little power, but their applications are limited by the reliance on battery power. Scavenging energy from ambient vibrations, wind, heat, or light could enable smart sensors to function indefinitely.
Typical power densities available from energy harvesting devices are highly dependent upon the specific application (affecting the generator's size) and the design itself of the harvesting generator. In general, for motion-powered devices, typical values are a few μW/cm3 for human body-powered applications and hundreds of μW/cm3 for generators powered by machinery. Most energy-scavenging devices for wearable electronics generate very little power.
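The power densities above set a hard budget for any sensor a harvester feeds. A sketch of the duty cycle a harvester of a given size can sustain (all device figures here are illustrative assumptions, not from the text):

```python
def sustainable_duty_cycle(p_harvest_uw, p_sleep_uw, p_active_uw):
    """Largest fraction of time a load can spend active if its average draw
    (p_sleep + duty * (p_active - p_sleep)) must not exceed harvested power.
    All powers in microwatts."""
    if p_harvest_uw <= p_sleep_uw:
        return 0.0  # harvester cannot even cover the sleep current
    return min(1.0, (p_harvest_uw - p_sleep_uw) / (p_active_uw - p_sleep_uw))

# 2 cm^3 machine-mounted harvester at 100 uW/cm^3 -> 200 uW budget;
# hypothetical node: 5 uW asleep, 15 mW while sampling and transmitting.
duty = sustainable_duty_cycle(200, 5, 15000)
print(f"active {duty * 100:.1f}% of the time")
```

Even a machinery-class harvester supports only about a 1.3% duty cycle for this hypothetical node, which is why harvested power is almost always buffered and duty-cycled.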
=== Storage of power ===
In general, energy can be stored in a capacitor, supercapacitor, or battery. Capacitors are used when the application needs to provide large energy spikes. Batteries leak less energy and are therefore used when the device needs to provide a steady flow of energy; these characteristics depend on the type of battery used. Common battery types for this purpose are lead-acid and lithium-ion, although older types such as nickel-metal hydride are still widely used. Compared to batteries, supercapacitors have virtually unlimited charge-discharge cycles and can therefore operate indefinitely, enabling maintenance-free operation in IoT and wireless sensor devices.
=== Use of the power ===
Current interest in low-power energy harvesting is for independent sensor networks. In these applications, an energy harvesting scheme stores harvested power in a capacitor, then boosts/regulates it to a second storage capacitor or battery for use by the microprocessor or for data transmission. The power is usually used in a sensor application, and the data is stored or transmitted, possibly by wireless means.
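The capacitor-buffering scheme just described can be sized with the standard relation E = ½CV²; the usable energy is the difference between the fully charged state and the regulator's minimum input voltage. A sketch (component values are illustrative, not from the text):

```python
def usable_energy_j(capacitance_f, v_charged, v_min):
    """Energy (J) extractable from a storage capacitor as it discharges
    from v_charged down to v_min (the regulator's minimum input):
    E = 0.5 * C * (V1^2 - V2^2)."""
    return 0.5 * capacitance_f * (v_charged**2 - v_min**2)

# Hypothetical 100 uF buffer charged to 3.3 V; regulator drops out at 1.8 V.
e = usable_energy_j(100e-6, 3.3, 1.8)
# A radio packet costing ~50 uJ (assumed) could be sent this many times per full buffer:
print(f"{e * 1e6:.0f} uJ usable, ~{int(e // 50e-6)} packets")
```

This is why a small buffer capacitor suffices for burst loads such as a single radio transmission, while a battery is preferred for steady drains.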
== Motivation ==
One of the main driving forces behind the search for new energy harvesting devices is the desire to power sensor networks and mobile devices without batteries that need external charging or service. Batteries have several limitations, such as limited lifespan, environmental impact, size, weight, and cost. Energy harvesting devices can provide an alternative or complementary source of power for applications that require low power consumption, such as remote sensing, wearable electronics, condition monitoring, and wireless sensor networks. Energy harvesting devices can also extend the battery life or enable batteryless operation of some applications.
Another motivation for energy harvesting is the potential to address the issue of climate change by reducing greenhouse gas emissions and fossil fuel consumption. Energy harvesting devices can utilize renewable and clean sources of energy that are abundant and ubiquitous in the environment, such as solar, thermal, wind, and kinetic energy. Energy harvesting devices can also reduce the need for power transmission and distribution systems that cause energy losses and environmental impacts. Energy harvesting devices can therefore contribute to the development of a more sustainable and resilient energy system.
Recent research in energy harvesting has led to the innovation of devices capable of powering themselves through user interactions. Notable examples include battery-free game boys and other toys, which showcase the potential of devices powered by the energy generated from user actions, such as pressing buttons or turning knobs. These studies highlight how energy harvested from interactions can not only power the devices themselves but also extend their operational autonomy, promoting the use of renewable energy sources and reducing reliance on traditional batteries.
== Energy sources ==
There are many small-scale energy sources that generally cannot be scaled up to industrial size with output comparable to industrial-scale solar, wind, or wave power:
Some wristwatches are powered by kinetic energy (called automatic watches) generated through movement of the arm when walking. The arm movement causes winding of the watch's mainspring. Other designs, like Seiko's Kinetic, use a loose internal permanent magnet to generate electricity.
Photovoltaics is a method of generating electrical power by converting solar radiation into direct current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels composed of a number of cells containing a photovoltaic material. Photovoltaics have been scaled up to industrial size and large-scale solar farms now exist.
Thermoelectric generators (TEGs) consist of the junction of two dissimilar materials and the presence of a thermal gradient. High-voltage outputs are possible by connecting many junctions electrically in series and thermally in parallel. Typical performance is 100–300 μV/K per junction. These can be utilized to capture mWs of energy from industrial equipment, structures, and even the human body. They are typically coupled with heat sinks to improve temperature gradient.
Micro wind turbines are used to harvest kinetic energy readily available in the environment in the form of wind to fuel low-power electronic devices such as wireless sensor nodes. When air flows across the blades of the turbine, a net pressure difference is developed between the wind speeds above and below the blades. This will result in a lift force generated which in turn rotates the blades. Similar to photovoltaics, wind farms have been constructed on an industrial scale and are being used to generate substantial amounts of electrical energy.
Piezoelectric crystals or fibers generate a small voltage whenever they are mechanically deformed. Vibration from engines can stimulate piezoelectric materials, as can the heel of a shoe or the pushing of a button.
Special antennas can collect energy from stray radio waves. This can also be done with a Rectenna and theoretically at even higher frequency EM radiation with a Nantenna.
Power from keys pressed during use of a portable electronic device or remote controller, using magnet and coil or piezoelectric energy converters, may be used to help power the device.
Vibration energy harvesting, based on electromagnetic induction, uses a magnet and a copper coil in the most simple versions to generate a current that can be converted into electricity.
Electrically-charged humidity produces electricity in the Air-gen, a nanopore-based device invented by a group at the University of Massachusetts at Amherst led by Jun Yao.
=== Ambient-radiation sources ===
A possible source of energy comes from ubiquitous radio transmitters. Historically, either a large collection area or close proximity to the radiating wireless energy source is needed to get useful power levels from this source. The nantenna is one proposed development which would overcome this limitation by making use of the abundant natural radiation (such as solar radiation).
One idea is to deliberately broadcast RF energy to power and collect information from remote devices. This is now commonplace in passive radio-frequency identification (RFID) systems, but the US Federal Communications Commission (and equivalent bodies worldwide) limits the maximum power that can be transmitted this way for civilian use. This method has been used to power individual nodes in a wireless sensor network.
=== Fluid flow ===
Various turbine and non-turbine generator technologies can harvest airflow. Towered wind turbines and airborne wind energy systems (AWES) harness the flow of air. Multiple companies are developing these technologies, which can operate in low-light environments, such as HVAC ducts, and can be scaled and optimized for the energy requirements of specific applications.
The flow of blood can also be utilized to power devices. For example, a pacemaker developed at the University of Bern uses blood flow to wind up a spring, which then drives an electrical micro-generator.
Water energy harvesting has seen advancements in design, such as generators with transistor-like architecture, achieving high energy conversion efficiency and power density.
=== Photovoltaic ===
Photovoltaic (PV) energy harvesting wireless technology offers significant advantages over wired or solely battery-powered sensor solutions: a virtually inexhaustible source of power with little or no adverse environmental effect. Indoor PV harvesting solutions have to date been powered by specially tuned amorphous silicon (aSi), a technology most used in solar calculators. In recent years, new PV technologies have come to the forefront in energy harvesting, such as dye-sensitized solar cells (DSSC). The dyes absorb light much as chlorophyll does in plants. Electrons released on impact escape into the TiO2 layer and from there diffuse through the electrolyte; because the dye can be tuned to the visible spectrum, much higher power can be produced. At 200 lux, a DSSC can provide over 10 μW per cm2.
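The quoted indoor figure (over 10 μW/cm² at 200 lux) makes it straightforward to estimate the cell area a given load requires; a minimal sketch, with the load figure an illustrative assumption:

```python
def dssc_area_cm2(load_uw, power_density_uw_per_cm2=10.0):
    """Cell area (cm^2) needed to supply load_uw microwatts at the
    200-lux DSSC power density quoted in the text (10 uW/cm^2)."""
    return load_uw / power_density_uw_per_cm2

# A hypothetical 100 uW duty-cycled sensor node would need:
print(dssc_area_cm2(100))
```

That is, a roughly 10 cm² indoor cell covers a 100 μW average budget, which is why DSSCs are attractive for wireless sensors in office lighting.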
=== Piezoelectric ===
The piezoelectric effect converts mechanical strain into electric current or voltage. This strain can come from many different sources; human motion, low-frequency seismic vibrations, and acoustic noise are everyday examples. Except in rare instances, the piezoelectric effect operates in AC, requiring time-varying inputs at mechanical resonance to be efficient.
Most piezoelectric electricity sources produce power on the order of milliwatts, too small for system application, but enough for hand-held devices such as some commercially available self-winding wristwatches. One proposal is that they are used for micro-scale devices, such as in a device harvesting micro-hydraulic energy. In this device, the flow of pressurized hydraulic fluid drives a reciprocating piston supported by three piezoelectric elements which convert the pressure fluctuations into an alternating current.
As piezo energy harvesting has been investigated only since the late 1990s, it remains an emerging technology. Nevertheless, some interesting improvements were made with the self-powered electronic switch at the INSA school of engineering, implemented by the spin-off Arveni. In 2006, a proof of concept of a battery-less wireless doorbell push button was created, and more recently a product showed that a classical wireless wall switch can be powered by a piezo harvester. Other industrial applications appeared between 2000 and 2005, for example harvesting energy from vibration to supply sensors, or harvesting energy from shock.
Piezoelectric systems can convert motion from the human body into electrical power. DARPA has funded efforts to harness energy from leg and arm motion, shoe impacts, and blood pressure for low-level power to implantable or wearable sensors. Nanobrushes, which can be integrated into clothing, are another example of a piezoelectric energy harvester. Multiple other nanostructures have been exploited to build energy-harvesting devices; for example, a single-crystal PMN-PT nanobelt was fabricated and assembled into a piezoelectric energy harvester in 2016. Careful design is needed to minimise user discomfort, since these energy harvesting sources act on the body. The Vibration Energy Scavenging Project is another project set up to scavenge electrical energy from environmental vibrations and movements, and a microbelt can be used to gather electricity from respiration. Moreover, as the vibration of human motion comes in three directions, a single piezoelectric-cantilever-based omnidirectional energy harvester has been created using 1:2 internal resonance. Finally, a millimeter-scale piezoelectric energy harvester has also been created.
Piezo elements are being embedded in walkways to recover the "people energy" of footsteps. They can also be embedded in shoes to recover "walking energy". Researchers at MIT developed the first micro-scale piezoelectric energy harvester using thin-film PZT in 2005. Arman Hajati and Sang-Gook Kim invented an ultra-wide-bandwidth micro-scale piezoelectric energy harvesting device by exploiting the nonlinear stiffness of a doubly clamped microelectromechanical systems (MEMS) resonator. The stretching strain in a doubly clamped beam exhibits a nonlinear stiffness, which provides passive feedback and results in amplitude-stiffened Duffing-mode resonance. Typically, piezoelectric cantilevers are adopted for such energy harvesting systems. One drawback is that the piezoelectric cantilever has a gradient strain distribution, i.e., the piezoelectric transducer is not fully utilized. To address this issue, triangle-shaped and L-shaped cantilevers have been proposed for uniform strain distribution.
In 2018, Soochow University researchers reported hybridizing a triboelectric nanogenerator and a silicon solar cell by sharing a mutual electrode. This device can collect solar energy or convert the mechanical energy of falling raindrops into electricity.
UK telecom company Orange UK created an energy harvesting T-shirt and boots. Other companies have also done the same.
=== Energy from smart roads and piezoelectricity ===
Brothers Pierre and Jacques Curie discovered the piezoelectric effect in 1880. The piezoelectric effect converts mechanical strain into voltage or electric current and generates electric energy from motion, weight, vibration, and temperature changes.
Considering the piezoelectric effect in thin-film lead zirconate titanate, Pb(Zr,Ti)O3 (PZT), microelectromechanical systems (MEMS) power-generating devices have been developed. During recent improvements in piezoelectric technology, Aqsa Abbasi differentiated two modes, d31 and d33, in vibration converters and re-designed them to resonate at specific frequencies from an external vibration energy source, thereby creating electrical energy via the piezoelectric effect using an electromechanically damped mass. However, beam-structured electrostatic devices are more difficult to fabricate than comparable PZT MEMS devices, because general silicon processing involves many more mask steps that do not require a PZT film. Piezoelectric d31-type sensors and actuators have a cantilever beam structure that consists of a membrane, a bottom electrode, a piezoelectric film, and a top electrode. Three to five or more mask steps are required for patterning each layer, while the induced voltage is very low. Pyroelectric crystals have a unique polar axis along which spontaneous polarization exists; these are crystals of classes 6mm, 4mm, mm2, 6, 4, 3m, 3, 2, and m. The special polar axis (the crystallophysical axis X3) coincides with the axes L6, L4, L3, and L2 of the crystals, or lies in the unique plane P (class "m"). Under external effects, the electric centers of positive and negative charges of an elementary cell are displaced from their equilibrium positions, i.e., the spontaneous polarization of the crystal changes; all such crystals therefore have a spontaneous polarization Ps = P3. Since the piezoelectric effect in pyroelectric crystals arises from changes in their spontaneous polarization under external effects (electric fields, mechanical stresses), a displacement changes the components of Ps along all three axes: ΔPs = (ΔP1, ΔP2, ΔP3). Assuming, to a first approximation, that ΔPs is proportional to the mechanical stress causing it, this yields ΔPi = dikl Tkl, where Tkl represents the mechanical stress and dikl represents the piezoelectric moduli.
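For a single uniaxial stress applied along the poling axis of a d33-mode element, the tensor relation ΔPi = dikl Tkl collapses to the scalar form ΔP = d33·T; a numeric sketch (the d33 value is a typical PZT figure assumed for illustration, not from the text):

```python
def polarization_change_c_per_m2(d33_pc_per_n, stress_pa):
    """Change in polarization (C/m^2) for a uniaxial stress along the
    poling axis: delta_P = d33 * T, the single-component form of
    dP_i = d_ikl * T_kl. d33 is given in pC/N, stress in Pa."""
    return d33_pc_per_n * 1e-12 * stress_pa

# Assumed typical PZT d33 of ~400 pC/N under a 1 MPa stress:
dp = polarization_change_c_per_m2(400, 1e6)
print(f"{dp * 1e6:.0f} uC/m^2")
```

Multiplying this polarization change by the electrode area gives the charge generated per stress cycle, which is the quantity a road-embedded harvester circuit actually collects.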
PZT thin films have attracted attention for applications such as force sensors, accelerometers, gyroscopes actuators, tunable optics, micro pumps, ferroelectric RAM, display systems and smart roads, when energy sources are limited, energy harvesting plays an important role in the environment. Smart roads have the potential to play an important role in power generation. Embedding piezoelectric material in the road can convert pressure exerted by moving vehicles into voltage and current.
=== Smart transportation intelligent system ===
Piezoelectric sensors are most useful in smart-road technologies that can be used to create systems that are intelligent and improve productivity in the long run: highways that alert motorists to a traffic jam before it forms, bridges that report when they are at risk of collapse, or an electric grid that fixes itself when blackouts hit. For many decades, scientists and experts have argued that the best way to fight congestion is intelligent transportation systems, such as roadside sensors to measure traffic and synchronized traffic lights to control the flow of vehicles, but the spread of these technologies has been limited by cost. Some smart-technology shovel-ready projects could be deployed fairly quickly, but most of the technologies are still at the development stage and might not be practically available for five years or more.
=== Pyroelectric ===
The pyroelectric effect converts a temperature change into electric current or voltage. It is analogous to the piezoelectric effect, which is another type of ferroelectric behavior. Pyroelectricity requires time-varying inputs and suffers from small power outputs in energy harvesting applications due to its low operating frequencies. However, one key advantage of pyroelectrics over thermoelectrics is that many pyroelectric materials are stable up to 1200 °C or higher, enabling energy harvesting from high temperature sources and thus increasing thermodynamic efficiency.
One way to directly convert waste heat into electricity is by executing the Olsen cycle on pyroelectric materials. The Olsen cycle consists of two isothermal and two isoelectric field processes in the electric displacement-electric field (D-E) diagram. The principle of the Olsen cycle is to charge a capacitor via cooling under low electric field and to discharge it under heating at higher electric field. Several pyroelectric converters have been developed to implement the Olsen cycle using conduction, convection, or radiation. It has also been established theoretically that pyroelectric conversion based on heat regeneration using an oscillating working fluid and the Olsen cycle can reach Carnot efficiency between a hot and a cold thermal reservoir. Moreover, recent studies have established polyvinylidene fluoride trifluoroethylene [P(VDF-TrFE)] polymers and lead lanthanum zirconate titanate (PLZT) ceramics as promising pyroelectric materials to use in energy converters due to their large energy densities generated at low temperatures. Additionally, a pyroelectric scavenging device that does not require time-varying inputs was recently introduced. The energy-harvesting device uses the edge-depolarizing electric field of a heated pyroelectric to convert heat energy into mechanical energy instead of drawing electric current off two plates attached to the crystal-faces.
=== Thermoelectrics ===
In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two dissimilar conductors produces a voltage. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler. The heat absorbed or produced is proportional to the current, and the proportionality constant is known as the Peltier coefficient. Today, due to knowledge of the Seebeck and Peltier effects, thermoelectric materials can be used as heaters, coolers and generators (TEGs).
Ideal thermoelectric materials have a high Seebeck coefficient, high electrical conductivity, and low thermal conductivity. Low thermal conductivity is necessary to maintain a high thermal gradient at the junction. Standard thermoelectric modules manufactured today consist of P- and N-doped bismuth-telluride semiconductors sandwiched between two metallized ceramic plates. The ceramic plates add rigidity and electrical insulation to the system. The semiconductors are connected electrically in series and thermally in parallel.
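Because the couples are connected electrically in series, a module's open-circuit voltage scales linearly with couple count, per-junction Seebeck output, and temperature difference (V = n·S·ΔT). A sketch using the 100–300 μV/K per-junction figure quoted earlier; the 127-couple module size is a common commercial format assumed here:

```python
def teg_open_circuit_v(n_junctions, seebeck_uv_per_k, delta_t_k):
    """Open-circuit voltage of a TEG with n_junctions couples electrically
    in series, each producing seebeck_uv_per_k microvolts per kelvin."""
    return n_junctions * seebeck_uv_per_k * 1e-6 * delta_t_k

# 127 couples at 200 uV/K across a 10 K gradient:
v = teg_open_circuit_v(127, 200, 10)
print(f"{v:.3f} V")
```

The resulting fraction of a volt is why TEG harvesters are almost always paired with a boost converter before they can charge a storage element.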
Miniature thermocouples have been developed that convert body heat into electricity, generating 40 μW at 3 V with a 5-degree temperature gradient; at the other end of the scale, large thermocouples are used in nuclear RTG batteries.
Practical examples are the finger-heartratemeter by the Holst Centre and the thermogenerators by the Fraunhofer-Gesellschaft.
Advantages of thermoelectrics:
No moving parts allow continuous operation for many years.
Thermoelectrics contain no materials that must be replenished.
Heating and cooling can be reversed.
One downside to thermoelectric energy conversion is low efficiency (currently less than 10%). The development of materials able to operate across higher temperature gradients, and to conduct electricity well without also conducting heat (something until recently thought impossible), will result in increased efficiency.
Future work in thermoelectrics could be to convert wasted heat, such as in automobile engine combustion, into electricity.
=== Electrostatic (capacitive) ===
This type of harvesting is based on the changing capacitance of vibration-dependent capacitors. Vibrations separate the plates of a charged variable capacitor, and mechanical energy is converted into electrical energy.
Electrostatic energy harvesters need a polarization source to work and to convert mechanical energy from vibrations into electricity. The polarization source should be on the order of some hundreds of volts, which greatly complicates the power management circuit. Another solution consists of using electrets, which are electrically charged dielectrics able to keep the polarization on the capacitor for years.
It is possible to adapt structures from classical electrostatic induction generators, which also extract energy from variable capacitances, for this purpose. The resulting devices are self-biasing and can directly charge batteries, or can produce exponentially growing voltages on storage capacitors, from which energy can be periodically extracted by DC/DC converters.
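The charge-constrained conversion described above follows from E = Q²/2C: holding the charge fixed while vibration pulls the plates apart (reducing C) raises the stored energy, the difference being supplied by mechanical work. A sketch with illustrative MEMS-scale values (assumed, not from the text):

```python
def energy_gain_j(charge_c, c_max_f, c_min_f):
    """Energy gained (J) per cycle when a capacitor holding fixed charge
    is pulled from c_max down to c_min by vibration: E = Q^2/2 * (1/C2 - 1/C1)."""
    return charge_c**2 / 2.0 * (1.0 / c_min_f - 1.0 / c_max_f)

# Hypothetical values: 10 nC pre-charge, capacitance swings 100 pF -> 20 pF.
e = energy_gain_j(10e-9, 100e-12, 20e-12)
print(f"{e * 1e6:.1f} uJ per cycle")
```

Note the quadratic dependence on the pre-charge: this is why a substantial polarization voltage (or an electret) matters so much to electrostatic harvester output.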
=== Magnetic induction ===
Magnetic induction refers to the production of an electromotive force (i.e., voltage) in a changing magnetic field. This changing magnetic field can be created by motion, either rotation (i.e. Wiegand effect and Wiegand sensors) or linear movement (i.e. vibration).
Magnets wobbling on a cantilever are sensitive to even small vibrations and generate microcurrents by moving relative to conductors due to Faraday's law of induction. By developing a miniature device of this kind in 2007, a team from the University of Southampton made possible the planting of such a device in environments that preclude having any electrical connection to the outside world. Sensors in inaccessible places can now generate their own power and transmit data to outside receivers.
One of the major limitations of the magnetic vibration energy harvester developed at the University of Southampton is its size: the generator itself is approximately one cubic centimeter, much too large to integrate into today's mobile technologies, and the complete generator including circuitry measures 4 cm by 4 cm by 1 cm, nearly the same size as some mobile devices such as the iPod nano. Further reductions in the dimensions are possible through the integration of new and more flexible materials as the cantilever beam component. In 2012, a group at Northwestern University developed a vibration-powered generator out of polymer in the form of a spring. This device was able to target the same frequencies as the University of Southampton group's silicon-based device, but with a beam component one third the size.
A new approach to magnetic induction based energy harvesting has also been proposed by using ferrofluids. The journal article, "Electromagnetic ferrofluid-based energy harvester", discusses the use of ferrofluids to harvest low frequency vibrational energy at 2.2 Hz with a power output of ~80 mW per g.
More recently, the change in domain-wall pattern under applied stress has been proposed as a method to harvest energy using magnetic induction. In this study, the authors showed that applied stress can change the domain pattern in microwires. Ambient vibrations can stress microwires, inducing a change in the domain pattern and hence a change in the induction. Power on the order of µW/cm² has been reported.
Commercially successful vibration energy harvesters based on magnetic induction are still relatively few in number. Examples include products developed by Swedish company ReVibe Energy, a technology spin-out from Saab Group. Another example is the products developed from the early University of Southampton prototypes by Perpetuum. These have to be sufficiently large to generate the power required by wireless sensor nodes (WSN) but in M2M applications this is not normally an issue. These harvesters are now being supplied in large volumes to power WSNs made by companies such as GE and Emerson and also for train bearing monitoring systems made by Perpetuum.
Overhead powerline sensors can use magnetic induction to harvest energy directly from the conductor they are monitoring.
=== Blood sugar ===
Another way of harvesting energy is through the oxidation of blood sugars; these energy harvesters are called biobatteries. They could be used to power implanted electronic devices (e.g., pacemakers, implanted biosensors for diabetics, and implanted active RFID devices). At present, the Minteer Group of Saint Louis University has created enzymes that could be used to generate power from blood sugars; however, the enzymes would still need to be replaced after a few years. In 2012, a pacemaker was powered by implantable biofuel cells at Clarkson University under the leadership of Dr. Evgeny Katz.
=== Tree-based ===
Tree metabolic energy harvesting is a type of bio-energy harvesting. Voltree has developed a method for harvesting energy from trees. These energy harvesters are being used to power remote sensors and mesh networks as the basis for a long term deployment system to monitor forest fires and weather in the forest. According to Voltree's website, the useful life of such a device should be limited only by the lifetime of the tree to which it is attached. A small test network was recently deployed in a US National Park forest.
Other sources of energy from trees include capturing the physical movement of the tree in a generator. Theoretical analysis of this source of energy shows some promise in powering small electronic devices. A practical device based on this theory has been built and successfully powered a sensor node for a year.
=== Metamaterial ===
A metamaterial-based device wirelessly converts a 900 MHz microwave signal to 7.3 volts of direct current (greater than that of a USB device). The device can be tuned to harvest other signals, including Wi-Fi signals, satellite signals, or even sound signals. The experimental device used a series of five fiberglass and copper conductors, and conversion efficiency reached 37 percent. Traditional antennas interfere with one another when placed close together in space, and because far-field RF power density falls off with the square of the distance, the amount of power available is very small. While the claim of 7.3 volts sounds impressive, the measurement is for an open circuit; since the power is so low, there can be almost no current when any load is attached.
=== Atmospheric pressure changes ===
The pressure of the atmosphere changes naturally over time from temperature changes and weather patterns. Devices with a sealed chamber can use these pressure differences to extract energy. This has been used to provide power for mechanical clocks such as the Atmos clock.
=== Ocean energy ===
A relatively new concept is generating energy from the oceans. The planet's large bodies of water carry great amounts of energy, which can be harvested from tidal streams, ocean waves, and differences in salinity and temperature. As of 2018, efforts are underway to harvest energy this way; the United States Navy was recently able to generate electricity using temperature differences present in the ocean.
One method to exploit the temperature difference across levels of the ocean thermocline is a thermal energy harvester equipped with a material that changes phase in different temperature regions. This is typically a polymer-based material that can handle reversible heat treatments. When the material changes phase, the energy differential is converted into mechanical energy. The materials used need to be able to change phase, from liquid to solid, depending on the position of the thermocline underwater. These phase-change materials within thermal energy harvesting units would be an ideal way to recharge or power an unmanned underwater vehicle (UUV), because they rely on the warm and cold water already present in large bodies of water, minimizing the need for standard battery recharging. Capturing this energy would allow for longer-term missions, since the vehicle's need to be collected or to return for charging can be eliminated. This is also a very environmentally friendly method of powering underwater vehicles: no emissions come from using a phase-change fluid, and it will likely have a longer lifespan than a standard battery.
== Future directions ==
Electroactive polymers (EAPs) have been proposed for harvesting energy. These polymers have a large strain, elastic energy density, and high energy conversion efficiency. The total weight of systems based on EAPs (electroactive polymers) is proposed to be significantly lower than those based on piezoelectric materials.
Nanogenerators, such as the one made by Georgia Tech, could provide a new way of powering devices without batteries. As of 2008, such a device generates only a few dozen nanowatts, which is too low for any practical application.
Noise has been the subject of a proposal by the NiPS Laboratory in Italy to harvest wide-spectrum, low-amplitude vibrations via a nonlinear dynamical mechanism that can improve harvester efficiency by up to a factor of 4 compared to traditional linear harvesters.
Combinations of different types of energy harvesters can further reduce dependence on batteries, particularly in environments where the available ambient energy types change periodically. This type of complementary balanced energy harvesting has the potential to increase reliability of wireless sensor systems for structural health monitoring.
== See also ==
== References ==
== External links ==
Callendar, Hugh Longbourne (1911). "Thermoelectricity". Encyclopædia Britannica. Vol. 26 (11th ed.). pp. 814–821.
The Association of Energy Engineers (AEE) is a non-profit professional society founded in 1977 by Albert Thumann. The organization promotes scientific and education interests in the energy industry through its networking and outreach efforts and educational and professional certification programs.
== Certifications ==
Since 1981 the Association of Energy Engineers has certified more than 33,000 professionals, whose credentials are recognized by cities, states, countries and organizations around the world, as well as the U.S. Department of Energy and the U.S. Agency for International Development.
AEE offers the following certifications:
Certified Energy Manager (CEM)
Energy Manager in Training (EMIT)
Certified Energy Auditor (CEA)
Certified Energy Auditor – Master’s Level (CEAM)
Certified Measurement & Verification Professional (CMVP)
Certified Business Energy Professional (BEP)
Certified Building Energy Simulation Analyst (BESA)
Certified Building Commissioning Professional (CBCP)
Certified Building Commissioning Professional – Master’s Level (CBCPM)
Certified Energy Procurement Professional (CEP)
Certified GeoExchange Designer (CGD) (see International Ground Source Heat Pump Association)
Certified Lighting Efficiency Professional (CLEP)
Certified Power Quality Professional (CPQ)
Certified Carbon Reduction Manager (CRM)
Certified Carbon Auditor Professional (CAP)
Certified in the Use of RETScreen (CRU)
Certified Sustainable Development Professional (CSDP)
Distributed Generation Certified Professional (DGCP)
Existing Building Commissioning Professional (EBCP)
High Performance Building Professional (HPB)
Green Building Engineer (GBE)
Certified Residential Energy Auditor (REA)
Renewable Energy Professional (REP)
Energy Efficiency Practitioner (EEP)
Certified Performance Contracting & Funding Professional (PCF)
Government Operator of High Performance Buildings (GOHP)
Certified Demand-Side Management Professional (CDSM)
Certified Indoor Air Quality Professional (CIAQP)
Certified Industrial Energy Professional (CIEP)
Certified Water Efficiency Professional (CWEP)
== Conferences/shows ==
Each year, the Association of Energy Engineers (AEE) presents four conference and trade show events for energy and facility professionals. These events are held throughout the continental United States and Europe, and provide opportunities to find out more about the issues and marketplace developments that impact decisions, as well as to see emerging technologies first hand.
AEE's four annual trade show events are:
AEE East Energy Conference & Expo
AEE West Energy Conference & Expo
AEE World Energy Conference & Expo
AEE Europe Energy Conference & Expo
Conferences presented by AEE through 2018:
World Energy Engineering Congress (WEEC)
Globalcon Conference & Expo
West Coast Energy Management Congress
== Publications ==
The Association of Energy Engineers publishes three journals:
International Journal of Energy Management
International Journal of Strategic Energy and Environmental Planning
Alternative Energy and Distributed Generation Journal
AEE members also receive newsletters and reports on the energy industry.
== References ==
== External links ==
The Association of Energy Engineers website
Energy conversion efficiency (η) is the ratio between the useful output of an energy conversion machine and the input, in energy terms. The input, as well as the useful output may be chemical, electric power, mechanical work, light (radiation), or heat. The resulting value, η (eta), ranges between 0 and 1.
== Overview ==
Energy conversion efficiency depends on the usefulness of the output. All or part of the heat produced from burning a fuel may become rejected waste heat if, for example, work is the desired output from a thermodynamic cycle. An energy converter is a machine that performs an energy transformation; a light bulb, for example, is an energy converter.
{\displaystyle \eta ={\frac {P_{\mathrm {out} }}{P_{\mathrm {in} }}}}
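As a quick numeric illustration of this ratio (the bulb figures below are assumed, round numbers, not values from the text):

```python
def efficiency(p_out, p_in):
    """Energy conversion efficiency: useful power out over power in."""
    if p_in <= 0:
        raise ValueError("input power must be positive")
    return p_out / p_in

# Illustrative: an incandescent bulb drawing 60 W and emitting
# roughly 5 W as visible light (the rest is rejected as heat).
eta = efficiency(p_out=5.0, p_in=60.0)
print(f"{eta:.1%}")  # 8.3%
```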
Even though the definition includes the notion of usefulness, efficiency is considered a technical or physical term. Goal or mission oriented terms include effectiveness and efficacy.
Generally, energy conversion efficiency is a dimensionless number between 0 and 1.0, or 0% to 100%. Efficiencies cannot exceed 100%, which would result in a perpetual motion machine, which is impossible.
However, other effectiveness measures that can exceed 1.0 are used for refrigerators, heat pumps and other devices that move heat rather than convert it. It is not called efficiency, but the coefficient of performance, or COP. It is a ratio of useful heating or cooling provided relative to the work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5.
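A minimal sketch of the COP calculation described above, with illustrative numbers:

```python
def cop(heat_moved, work_input):
    """Coefficient of performance: useful heating or cooling
    delivered per unit of work consumed. Unlike efficiency,
    it routinely exceeds 1 for devices that move heat rather
    than convert it."""
    return heat_moved / work_input

# Illustrative: a heat pump delivering 3 kW of heating for 1 kW
# of electrical input has a COP of 3 -- comparable to the
# 2.3-3.5 range the text quotes for typical air conditioners.
print(cop(heat_moved=3000.0, work_input=1000.0))  # 3.0
```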
When discussing the efficiency of heat engines and power stations, the convention should be stated: HHV (a.k.a. gross heating value) or LHV (a.k.a. net heating value), and whether gross output (at the generator terminals) or net output (at the power-station fence) is being considered. The two conventions are separate, and both must be stated; failure to do so causes endless confusion.
Related, more specific terms include
Electrical efficiency, useful power output per electrical power consumed;
Mechanical efficiency, where one form of mechanical energy (e.g. potential energy of water) is converted to mechanical energy (work);
Thermal efficiency or Fuel efficiency, useful heat and/or work output per input energy such as the fuel consumed;
'Total efficiency', e.g., for cogeneration, useful electric power and heat output per fuel energy consumed. Same as the thermal efficiency.
Luminous efficiency, that portion of the emitted electromagnetic radiation is usable for human vision.
== Chemical conversion efficiency ==
The change of Gibbs energy of a defined chemical transformation at a particular temperature is the minimum theoretical quantity of energy required to make that change occur (if the change in Gibbs energy between reactants and products is positive) or the maximum theoretical energy that might be obtained from that change (if the change in Gibbs energy between reactants and products is negative). The energy efficiency of a process involving chemical change may be expressed relative to these theoretical minima or maxima. The difference between the change of enthalpy and the change of Gibbs energy of a chemical transformation at a particular temperature indicates the heat input required or the heat removal (cooling) required to maintain that temperature.
A fuel cell may be considered to be the reverse of electrolysis. For example, an ideal fuel cell operating at a temperature of 25 °C having gaseous hydrogen and gaseous oxygen as inputs and liquid water as the output could produce a theoretical maximum amount of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water produced and would require 48.701 kJ (0.01353 kWh) per gram mol of water produced of heat energy to be removed from the cell to maintain that temperature.
An ideal electrolysis unit operating at a temperature of 25 °C having liquid water as the input and gaseous hydrogen and gaseous oxygen as products would require a theoretical minimum input of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water consumed and would require 48.701 kJ (0.01353 kWh) per gram mol of water consumed of heat energy to be added to the unit to maintain that temperature. It would operate at a cell voltage of 1.24 V.
For a water electrolysis unit operating at a constant temperature of 25 °C without the input of any additional heat energy, electrical energy would have to be supplied at a rate equivalent to the enthalpy (heat) of reaction, or 285.830 kJ (0.07940 kWh) per gram mol of water consumed. It would operate at a cell voltage of 1.48 V. The electrical energy input of this cell is 1.20 times greater than the theoretical minimum, so the energy efficiency is 0.83 compared to the ideal cell.
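The figures in this paragraph can be checked from the Gibbs energy and enthalpy values already given, using the Faraday constant and the two electrons transferred per water molecule (a standard electrochemical relation, not stated explicitly in the text):

```python
F = 96485.332       # Faraday constant, C/mol
N_ELECTRONS = 2     # electrons transferred per mole of water

dG = 237.129e3      # Gibbs energy of reaction at 25 C, J per mol of water
dH = 285.830e3      # enthalpy of reaction at 25 C, J per mol of water

# Cell voltages follow from E = (energy per mole) / (n * F).
v_reversible = dG / (N_ELECTRONS * F)     # ~1.23 V (text rounds to 1.24 V)
v_thermoneutral = dH / (N_ELECTRONS * F)  # ~1.48 V, as stated above

# Efficiency of the constant-temperature cell relative to the ideal:
eta = dG / dH  # ~0.83, as stated above
print(f"{v_reversible:.2f} V, {v_thermoneutral:.2f} V, eta = {eta:.2f}")
```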
A water electrolysis unit operating with a voltage higher than 1.48 V at a temperature of 25 °C would have to have heat energy removed in order to maintain a constant temperature, and the energy efficiency would be less than 0.83.
The large entropy difference between liquid water and gaseous hydrogen plus gaseous oxygen accounts for the significant difference between the Gibbs energy of reaction and the enthalpy (heat) of reaction.
== Fuel heating values and efficiency ==
In Europe the usable energy content of a fuel is typically calculated using the lower heating value (LHV) of that fuel, the definition of which assumes that the water vapor produced during fuel combustion (oxidation) remains gaseous, and is not condensed to liquid water so the latent heat of vaporization of that water is not usable. Using the LHV, a condensing boiler can achieve a "heating efficiency" in excess of 100% (this does not violate the first law of thermodynamics as long as the LHV convention is understood, but does cause confusion). This is because the apparatus recovers part of the heat of vaporization, which is not included in the definition of the lower heating value of a fuel. In the U.S. and elsewhere, the higher heating value (HHV) is used, which includes the latent heat for condensing the water vapor, and thus the thermodynamic maximum of 100% efficiency cannot be exceeded.
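A short sketch of why the LHV convention can yield "efficiencies" above 100% while the HHV convention cannot; the heating values and recovered heat below are approximate, illustrative figures for natural gas, assumed for this example:

```python
# Illustrative heating values for natural gas (approximate, MJ/kg):
HHV = 55.5  # includes the latent heat of the water vapor formed
LHV = 50.0  # assumes the water vapor leaves uncondensed

# A condensing boiler that recovers part of the latent heat can
# deliver more useful heat than the LHV of the fuel:
useful_heat = 51.0  # MJ per kg of fuel, assumed for illustration

print(f"LHV basis: {useful_heat / LHV:.1%}")  # above 100%, no paradox
print(f"HHV basis: {useful_heat / HHV:.1%}")  # below 100%, as required
```

The same device, the same heat delivered: only the accounting convention differs, which is why the convention must always be stated.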
== Wall-plug efficiency, luminous efficiency, and efficacy ==
In optical systems such as lighting and lasers, the energy conversion efficiency is often referred to as wall-plug efficiency. The wall-plug efficiency is the measure of output radiative-energy, in watts (joules per second), per total input electrical energy in watts. The output energy is usually measured in terms of absolute irradiance and the wall-plug efficiency is given as a percentage of the total input energy, with the inverse percentage representing the losses.
The wall-plug efficiency differs from the luminous efficiency in that wall-plug efficiency describes the direct output/input conversion of energy (the amount of work that can be performed) whereas luminous efficiency takes into account the human eye's varying sensitivity to different wavelengths (how well it can illuminate a space). Instead of using watts, the power of a light source to produce wavelengths proportional to human perception is measured in lumens. The human eye is most sensitive to wavelengths of 555 nanometers (greenish-yellow) but the sensitivity decreases dramatically to either side of this wavelength, following a Gaussian power-curve and dropping to zero sensitivity at the red and violet ends of the spectrum. Due to this the eye does not usually see all of the wavelengths emitted by a particular light-source, nor does it see all of the wavelengths within the visual spectrum equally. Yellow and green, for example, make up more than 50% of what the eye perceives as being white, even though in terms of radiant energy white-light is made from equal portions of all colors (i.e.: a 5 mW green laser appears brighter than a 5 mW red laser, yet the red laser stands-out better against a white background). Therefore, the radiant intensity of a light source may be much greater than its luminous intensity, meaning that the source emits more energy than the eye can use. Likewise, the lamp's wall-plug efficiency is usually greater than its luminous efficiency. The effectiveness of a light source to convert electrical energy into wavelengths of visible light, in proportion to the sensitivity of the human eye, is referred to as luminous efficacy, which is measured in units of lumens per watt (lm/w) of electrical input-energy.
Unlike efficacy (effectiveness), which is a unit of measurement, efficiency is a unitless number expressed as a percentage, requiring only that the input and output units be of the same type. The luminous efficiency of a light source is thus the percentage of luminous efficacy per theoretical maximum efficacy at a specific wavelength. The amount of energy carried by a photon of light is determined by its wavelength. In lumens, this energy is offset by the eye's sensitivity to the selected wavelengths. For example, a green laser pointer can have greater than 30 times the apparent brightness of a red pointer of the same power output. At 555 nm in wavelength, 1 watt of radiant energy is equivalent to 683 lumens, thus a monochromatic light source at this wavelength, with a luminous efficacy of 683 lm/w, would have a luminous efficiency of 100%. The theoretical-maximum efficacy lowers for wavelengths at either side of 555 nm. For example, low-pressure sodium lamps produce monochromatic light at 589 nm with a luminous efficacy of 200 lm/w, which is the highest of any lamp. The theoretical-maximum efficacy at that wavelength is 525 lm/w, so the lamp has a luminous efficiency of 38.1%. Because the lamp is monochromatic, the luminous efficiency nearly matches the wall-plug efficiency of < 40%.
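The low-pressure sodium calculation above can be reproduced directly from the efficacy figures the text gives:

```python
MAX_EFFICACY_555NM = 683.0  # lm/W at the eye's peak sensitivity

def luminous_efficiency(efficacy, max_efficacy_at_wavelength):
    """Luminous efficiency: achieved luminous efficacy as a fraction
    of the theoretical maximum efficacy at the source's wavelength."""
    return efficacy / max_efficacy_at_wavelength

# The text's examples: a monochromatic 555 nm source at 683 lm/W,
# and a low-pressure sodium lamp at 200 lm/W against a 525 lm/W
# theoretical maximum at 589 nm.
assert luminous_efficiency(683.0, MAX_EFFICACY_555NM) == 1.0
eta_sodium = luminous_efficiency(200.0, 525.0)
print(f"{eta_sodium:.1%}")  # 38.1%, matching the figure above
```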
Calculations for luminous efficiency become more complex for lamps that produce white light or a mixture of spectral lines. Fluorescent lamps have higher wall-plug efficiencies than low-pressure sodium lamps, but only have half the luminous efficacy of ~ 100 lm/w, thus the luminous efficiency of fluorescents is lower than sodium lamps. A xenon flashtube has a typical wall-plug efficiency of 50–70%, exceeding that of most other forms of lighting. Because the flashtube emits large amounts of infrared and ultraviolet radiation, only a portion of the output energy is used by the eye. The luminous efficacy is therefore typically around 50 lm/w. However, not all applications for lighting involve the human eye nor are restricted to visible wavelengths. For laser pumping, the efficacy is not related to the human eye so it is not called "luminous" efficacy, but rather simply "efficacy" as it relates to the absorption lines of the laser medium. Krypton flashtubes are often chosen for pumping Nd:YAG lasers, even though their wall-plug efficiency is typically only ~ 40%. Krypton's spectral lines better match the absorption lines of the neodymium-doped crystal, thus the efficacy of krypton for this purpose is much higher than xenon; able to produce up to twice the laser output for the same electrical input. All of these terms refer to the amount of energy and lumens as they exit the light source, disregarding any losses that might occur within the lighting fixture or subsequent output optics. Luminaire efficiency refers to the total lumen-output from the fixture per the lamp output.
With the exception of a few light sources, such as incandescent light bulbs, most light sources have multiple stages of energy conversion between the "wall plug" (electrical input point, which may include batteries, direct wiring, or other sources) and the final light-output, with each stage producing a loss. Low-pressure sodium lamps initially convert the electrical energy using an electrical ballast, to maintain the proper current and voltage, but some energy is lost in the ballast. Similarly, fluorescent lamps also convert the electricity using a ballast (electronic efficiency). The electricity is then converted into light energy by the electrical arc (electrode efficiency and discharge efficiency). The light is then transferred to a fluorescent coating that only absorbs suitable wavelengths, with some losses of those wavelengths due to reflection off and transmission through the coating (transfer efficiency). The number of photons absorbed by the coating will not match the number then reemitted as fluorescence (quantum efficiency). Finally, due to the phenomenon of the Stokes shift, the re-emitted photons will have a longer wavelength (thus lower energy) than the absorbed photons (fluorescence efficiency). In very similar fashion, lasers also experience many stages of conversion between the wall plug and the output aperture. The terms "wall-plug efficiency" or "energy conversion efficiency" are therefore used to denote the overall efficiency of the energy-conversion device, deducting the losses from each stage, although this may exclude external components needed to operate some devices, such as coolant pumps.
== Example of energy conversion efficiency ==
== See also ==
== References ==
== External links ==
Does it make sense to switch to LED?
An energy system is a system primarily designed to supply energy-services to end-users.: 941 The intent behind energy systems is to minimise energy losses to a negligible level, as well as to ensure the efficient use of energy. The IPCC Fifth Assessment Report defines an energy system as "all components related to the production, conversion, delivery, and use of energy".: 1261
The first two definitions allow for demand-side measures, including daylighting, retrofitted building insulation, and passive solar building design, as well as socio-economic factors, such as aspects of energy demand management and remote work, while the third does not. Neither does the third account for the informal economy in traditional biomass that is significant in many developing countries.
The analysis of energy systems thus spans the disciplines of engineering and economics.: 1 Merging ideas from both areas to form a coherent description, particularly where macroeconomic dynamics are involved, is challenging.
The concept of an energy system is evolving as new regulations, technologies, and practices enter into service – for example, emissions trading, the development of smart grids, and the greater use of energy demand management, respectively.
== Treatment ==
From a structural perspective, an energy system is like any system and is made up of a set of interacting component parts, located within an environment. These components derive from ideas found in engineering and economics. Taking a process view, an energy system "consists of an integrated set of technical and economic activities operating within a complex societal framework".: 423 The identification of the components and behaviors of an energy system depends on the circumstances, the purpose of the analysis, and the questions under investigation. The concept of an energy system is therefore an abstraction which usually precedes some form of computer-based investigation, such as the construction and use of a suitable energy model.
Viewed in engineering terms, an energy system lends itself to representation as a flow network: the vertices map to engineering components like power stations and pipelines and the edges map to the interfaces between these components. This approach allows collections of similar or adjacent components to be aggregated and treated as one to simplify the model. Once described thus, flow network algorithms, such as minimum cost flow, may be applied. The components themselves can be treated as simple dynamical systems in their own right.
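The flow-network view described above can be sketched in a few lines; the component names and flow values are hypothetical, and real analyses would use dedicated flow-network algorithms such as minimum cost flow:

```python
# Vertices are engineering components; directed edges carry energy
# flows (here in MW). Flow must be conserved at interior vertices.
edges = {
    ("power_station", "grid_node"): 100.0,  # generation
    ("grid_node", "city"): 93.0,            # delivered to consumers
    ("grid_node", "losses"): 7.0,           # transmission losses
}

def net_flow(vertex):
    """Inflow minus outflow at a vertex; zero for interior vertices,
    negative for sources, positive for sinks."""
    inflow = sum(f for (_, v), f in edges.items() if v == vertex)
    outflow = sum(f for (u, _), f in edges.items() if u == vertex)
    return inflow - outflow

# Interior vertices must balance; sources and sinks need not.
assert net_flow("grid_node") == 0.0
print("flows balance at grid_node")
```

Aggregating adjacent components, as the text notes, simply merges vertices and sums the flows on their shared edges.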
=== Economic modeling ===
Conversely, relatively pure economic modeling may adopt a sectoral approach with only limited engineering detail present. The sector and sub-sector categories published by the International Energy Agency are often used as a basis for this analysis. A 2009 study of the UK residential energy sector contrasts the use of the technology-rich Markal model with several UK sectoral housing stock models.
==== Data ====
International energy statistics are typically broken down by carrier, sector and sub-sector, and country. Energy carriers (aka energy products) are further classified as primary energy and secondary (or intermediate) energy, and sometimes final (or end-use) energy. Published energy datasets are normally adjusted so that they are internally consistent, meaning that all energy stocks and flows must balance. The IEA regularly publishes energy statistics and energy balances with varying levels of detail and cost, and also offers mid-term projections based on this data. The notion of an energy carrier, as used in energy economics, is distinct from the definition of energy used in physics.
=== Scopes ===
Energy systems can range in scope, from local, municipal, national, and regional, to global, depending on issues under investigation. Researchers may or may not include demand side measures within their definition of an energy system. The Intergovernmental Panel on Climate Change (IPCC) does so, for instance, but covers these measures in separate chapters on transport, buildings, industry, and agriculture.: 1261 : 516
Household consumption and investment decisions may also be included within the ambit of an energy system. Such considerations are not common because consumer behavior is difficult to characterize, but the trend is to include human factors in models. Household decision-taking may be represented using techniques from bounded rationality and agent-based behavior. The American Association for the Advancement of Science (AAAS) specifically advocates that "more attention should be paid to incorporating behavioral considerations other than price- and income-driven behavior into economic models [of the energy system]".: 6
== Energy-services ==
The concept of an energy-service is central, particularly when defining the purpose of an energy system:
It is important to realize that the use of energy is no end in itself but is always directed to satisfy human needs and desires. Energy services are the ends for which the energy system provides the means.: 941
Energy-services can be defined as amenities that are either furnished through energy consumption or could have been thus supplied.: 2 More explicitly:
Demand should, where possible, be defined in terms of energy-service provision, as characterized by an appropriate intensity – for example, air temperature in the case of space-heating or lux levels for illuminance. This approach facilitates a much greater set of potential responses to the question of supply, including the use of energetically-passive techniques – for instance, retrofitted insulation and daylighting.: 156
A consideration of energy-services per capita and how such services contribute to human welfare and individual quality of life is paramount to the debate on sustainable energy. People living in poor regions with low levels of energy-services consumption would clearly benefit from greater consumption, but the same is not generally true for those with high levels of consumption.
The notion of energy-services has given rise to energy-service companies (ESCos), which contract to provide energy-services to a client for an extended period. The ESCo is then free to choose the best means to do so, including investments in the thermal performance and HVAC equipment of the buildings in question.
== International standards ==
ISO 13600, ISO 13601, and ISO 13602 form a set of international standards covering technical energy systems (TES). Although withdrawn prior to 2016, these documents provide useful definitions and a framework for formalizing such systems. The standards depict an energy system broken down into supply and demand sectors, linked by the flow of tradable energy commodities (or energywares). Each sector has a set of inputs and outputs, some intentional and some harmful byproducts. Sectors may be further divided into subsectors, each fulfilling a dedicated purpose. The demand sector is ultimately present to supply energyware-based services to consumers (see energy-services).
== Energy system redesign and transformation ==
Energy system design includes the redesigning of energy systems to ensure sustainability of the system and its dependents and for meeting requirements of the Paris Agreement for climate change mitigation. Researchers are designing energy systems models and transformational pathways for renewable energy transitions towards 100% renewable energy, often in the form of peer-reviewed text documents created once by small teams of scientists and published in a journal.
Considerations include the system's intermittency management, air pollution, various risks (such as for human safety, environmental risks, cost risks and feasibility risks), stability for prevention of power outages (including grid dependence or grid-design), resource requirements (including water and rare minerals and recyclability of components), technology/development requirements, costs, feasibility, other affected systems (such as land-use that affects food systems), carbon emissions, available energy quantity and transition-concerning factors (including costs, labor-related issues and speed of deployment).
Energy system design can also consider energy consumption, such as in terms of absolute energy demand, waste and consumption reduction (e.g. via reduced energy-use, increased efficiency and flexible timing), process efficiency enhancement and waste heat recovery. A study noted significant potential for a type of energy systems modelling to "move beyond single disciplinary approaches towards a sophisticated integrated perspective".
== See also ==
Control volume – a concept from mechanics and thermodynamics
Electric power system – a network of electrical components used to generate, transfer, and use electric power
Energy development – the effort to provide societies with sufficient energy with reduced social and environmental impact
Energy modeling – the process of building computer models of energy systems
Energy industry – the supply-side of the energy sector
Insular energy system – an energy system isolated from other nearby energy systems
Mathematical model – the representation of a system using mathematics and often solved using computers
Object-oriented programming – a computer programming paradigm suited to the representation of energy systems as networks
Network science – the study of complex networks
Open energy system databases – database projects which collect, clean, and republish energy-related datasets
Open energy system models – a review of energy system models that are also open source
Sankey diagram – used to show energy flows through a system
== Notes ==
== References ==
== External links ==
Energy storage is the capture of energy produced at one time for use at a later time to reduce imbalances between energy demand and energy production. A device that stores energy is generally called an accumulator or battery. Energy comes in multiple forms including radiation, chemical, gravitational potential, electrical potential, electricity, elevated temperature, latent heat and kinetic energy. Energy storage involves converting energy from forms that are difficult to store to more conveniently or economically storable forms.
Some technologies provide short-term energy storage, while others can endure for much longer. Bulk energy storage is currently dominated by hydroelectric dams, both conventional as well as pumped. Grid energy storage is a collection of methods used for energy storage on a large scale within an electrical power grid.
Common examples of energy storage are the rechargeable battery, which stores chemical energy readily convertible to electricity to operate a mobile phone; the hydroelectric dam, which stores energy in a reservoir as gravitational potential energy; and ice storage tanks, which store ice frozen by cheaper energy at night to meet peak daytime demand for cooling. Fossil fuels such as coal and gasoline store ancient energy derived from sunlight by organisms that later died, became buried and over time were then converted into these fuels. Food (which is made by the same process as fossil fuels) is a form of energy stored in chemical form.
== History ==
In the 20th-century grid, electrical power was largely generated by burning fossil fuel; when less power was required, less fuel was burned. Hydropower, a mechanical energy storage method, is the most widely adopted and has been in use for centuries; large hydropower dams have served as energy storage sites for more than one hundred years. Concerns with air pollution, energy imports, and global warming have spawned the growth of renewable energy such as solar and wind power. Wind power is uncontrolled and may be generating at a time when no additional power is needed. Solar power varies with cloud cover and at best is only available during daylight hours, while demand often peaks after sunset (see duck curve). Interest in storing power from these intermittent sources grows as the renewable energy industry begins to generate a larger fraction of overall energy consumption. In 2023 BloombergNEF forecast total energy storage deployments to grow at a compound annual growth rate of 27 percent through 2030.
Off-grid electrical use was a niche market in the 20th century, but it has expanded in the 21st. Portable devices are in use all over the world, and solar panels are now common in rural settings worldwide. Access to electricity is now a question of economics and financial viability, not solely of technical feasibility. Electric vehicles are gradually replacing combustion-engine vehicles, but powering long-distance transportation without burning fuel remains in development.
== Methods ==
=== Outline ===
Energy storage methods span mechanical, thermal, electrochemical, chemical, and electrical approaches, covered in turn below.
=== Mechanical ===
Energy can be stored in water pumped to a higher elevation using pumped storage methods or by moving solid matter to higher locations (gravity batteries). Other commercial mechanical methods include compressing air and flywheels that convert electric energy into internal energy or kinetic energy and then back again when electrical demand peaks.
==== Hydroelectricity ====
Hydroelectric dams with reservoirs can be operated to provide electricity at times of peak demand.
Water is stored in the reservoir during periods of low demand and released when demand is high.
The net effect is similar to pumped storage, but without the pumping loss.
While a hydroelectric dam does not directly store energy from other generating units, it behaves equivalently by lowering output in periods of excess electricity from other sources.
In this mode, dams are one of the most efficient forms of energy storage, because only the timing of generation changes.
Hydroelectric turbines have a start-up time on the order of a few minutes.
==== Pumped hydro ====
Worldwide, pumped-storage hydroelectricity (PSH) is the largest-capacity form of active grid energy storage available; as of March 2012, the Electric Power Research Institute (EPRI) reported that PSH accounts for more than 99% of bulk storage capacity worldwide, representing around 127,000 MW. PSH energy efficiency varies in practice between 70% and 80%, with claims of up to 87%.
At times of low electrical demand, excess generation capacity is used to pump water from a lower source into a higher reservoir. When demand grows, water is released back into a lower reservoir (or waterway or body of water) through a turbine, generating electricity. Reversible turbine-generator assemblies act as both a pump and turbine (usually a Francis turbine design). Nearly all facilities use the height difference between two water bodies. Pure pumped-storage plants shift the water between reservoirs, while the "pump-back" approach is a combination of pumped storage and conventional hydroelectric plants that use natural stream-flow.
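The energy recoverable from pumped storage follows directly from gravitational potential energy, E = m·g·h, discounted by the round-trip efficiency. A minimal sketch of the arithmetic; the reservoir volume, head, and efficiency below are illustrative assumptions, not figures for any real plant:

```python
# Recoverable energy from pumped hydro: E = m * g * h * efficiency.
# All plant figures below are assumed for illustration.

G = 9.81  # gravitational acceleration, m/s^2

def pumped_hydro_energy_kwh(volume_m3, head_m, efficiency=0.75):
    """Electrical energy (kWh) recoverable from water pumped up by head_m."""
    mass_kg = volume_m3 * 1000.0      # 1 m^3 of water is about 1000 kg
    joules = mass_kg * G * head_m * efficiency
    return joules / 3.6e6             # 1 kWh = 3.6e6 J

# One million cubic metres raised 300 m, at a 75% round-trip efficiency:
print(round(pumped_hydro_energy_kwh(1e6, 300), 1))  # -> 613125.0 kWh
```

At the 70–80% efficiencies quoted above, roughly a fifth to a third of the energy spent pumping is lost per cycle.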
==== Compressed air ====
Compressed-air energy storage (CAES) uses surplus energy to compress air for subsequent electricity generation. Small-scale systems have long been used in such applications as propulsion of mine locomotives. The compressed air is stored in an underground reservoir, such as a salt dome.
Compressed-air energy storage (CAES) plants can bridge the gap between production volatility and load. CAES storage addresses the energy needs of consumers by effectively providing readily available energy to meet demand. Renewable energy sources like wind and solar energy vary. So at times when they provide little power, they need to be supplemented with other forms of energy to meet energy demand. Compressed-air energy storage plants can take in the surplus energy output of renewable energy sources during times of energy over-production. This stored energy can be used at a later time when demand for electricity increases or energy resource availability decreases.
Compression of air creates heat; the air is warmer after compression. Expansion requires heat. If no extra heat is added, the air will be much colder after expansion. If the heat generated during compression can be stored and used during expansion, efficiency improves considerably. A CAES system can deal with the heat in three ways. Air storage can be adiabatic, diabatic, or isothermal. Another approach uses compressed air to power vehicles.
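The scale of the compression heat can be seen from the ideal adiabatic relation T2 = T1·(p2/p1)^((γ−1)/γ). A sketch with illustrative pressures (not those of a specific CAES plant):

```python
# Why compression heat matters in CAES: ideal single-stage adiabatic
# compression of air, T2 = T1 * (p2/p1)**((gamma-1)/gamma).
GAMMA = 1.4  # heat-capacity ratio of air

def adiabatic_outlet_temp(t1_k, p1_bar, p2_bar):
    return t1_k * (p2_bar / p1_bar) ** ((GAMMA - 1) / GAMMA)

# Compressing air from 1 bar at 20 C (293 K) to an assumed 70 bar cavern:
t2 = adiabatic_outlet_temp(293.0, 1.0, 70.0)
print(f"{t2:.0f} K")  # -> 986 K
```

An ideal single-stage compression to 70 bar would leave the air at roughly 700 °C; diabatic plants discard this heat, while adiabatic designs store it to improve round-trip efficiency.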
==== Flywheel ====
Flywheel energy storage (FES) works by accelerating a rotor (a flywheel) to a very high speed, holding energy as rotational energy. When energy is added, the rotational speed of the flywheel increases; when energy is extracted, the speed declines, due to conservation of energy.
Most FES systems use electricity to accelerate and decelerate the flywheel, but devices that directly use mechanical energy are under consideration.
FES systems have rotors made of high strength carbon-fiber composites, suspended by magnetic bearings and spinning at speeds from 20,000 to over 50,000 revolutions per minute (rpm) in a vacuum enclosure. Such flywheels can reach maximum speed ("charge") in a matter of minutes. The flywheel system is connected to a combination electric motor/generator.
FES systems have relatively long lifetimes (lasting decades with little or no maintenance; full-cycle lifetimes quoted for flywheels range from in excess of 10^5, up to 10^7, cycles of use), high specific energy (100–130 W·h/kg, or 360–500 kJ/kg) and power density.
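The stored energy is the rotational kinetic energy E = ½·I·ω². A sketch for a hypothetical solid-cylinder rotor (mass, radius, and speed are assumed for illustration); as a unit check, the quoted 100–130 W·h/kg converts to about 360–470 kJ/kg at 3.6 kJ per W·h:

```python
# Kinetic energy of a flywheel: E = 0.5 * I * omega^2, with the moment of
# inertia of a solid cylinder, I = 0.5 * m * r^2. Rotor figures assumed.
import math

def flywheel_energy_kj(mass_kg, radius_m, rpm):
    inertia = 0.5 * mass_kg * radius_m ** 2   # solid cylinder
    omega = rpm * 2 * math.pi / 60            # convert rpm to rad/s
    return 0.5 * inertia * omega ** 2 / 1000  # kJ

# A hypothetical 100 kg, 0.25 m radius rotor spun to 40,000 rpm:
e = flywheel_energy_kj(100, 0.25, 40_000)
print(f"{e:.0f} kJ ({e / 3600:.1f} kWh)")  # -> 27416 kJ (7.6 kWh)
```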
==== Solid mass gravitational ====
Changing the altitude of solid masses can store or release energy via an elevating system driven by an electric motor/generator. Studies suggest energy can begin to be released with as little as 1 second warning, making the method a useful supplemental feed into an electricity grid to balance load surges.
Efficiencies can be as high as 85% recovery of stored energy.
This can be achieved by siting the masses inside old vertical mine shafts or in specially constructed towers, where heavy weights are winched up to store energy and allowed a controlled descent to release it. As of 2020, a prototype vertical store was being built in Edinburgh, Scotland.
Potential energy storage or gravity energy storage was under active development in 2013 in association with the California Independent System Operator. It examined the movement of earth-filled hopper rail cars driven by electric locomotives from lower to higher elevations.
Other proposed methods include:
using rails, cranes, or elevators to move weights up and down;
using high-altitude solar-powered balloon platforms supporting winches to raise and lower solid masses slung underneath them;
using winches supported by an ocean barge to take advantage of a 4 km (13,000 ft) elevation difference between the sea surface and the seabed.
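All of these schemes store gravitational potential energy, E = m·g·h. A sketch with an assumed weight and shaft depth shows why large masses and long drops are needed:

```python
# Energy stored by raising a solid mass: E = m * g * h.
# Mass and drop below are assumptions for illustration.
G = 9.81  # m/s^2

def gravity_store_kwh(mass_tonnes, drop_m):
    return mass_tonnes * 1000 * G * drop_m / 3.6e6

# A hypothetical 500-tonne weight in an 800 m mine shaft:
print(round(gravity_store_kwh(500, 800), 1))  # -> 1090.0 kWh
```

Even a 500-tonne mass over 800 m stores only about 1 MWh, which is one reason pumped hydro, with an effectively unlimited mass of water, dominates bulk storage.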
=== Thermal ===
Thermal energy storage (TES) is the temporary storage or removal of heat.
==== Sensible heat thermal ====
Sensible heat storage takes advantage of the sensible heat of a material to store energy.
Seasonal thermal energy storage (STES) allows heat or cold to be used months after it was collected from waste energy or natural sources. The material can be stored in contained aquifers, clusters of boreholes in geological substrates such as sand or crystalline bedrock, in lined pits filled with gravel and water, or water-filled mines. STES projects often have paybacks in four to six years. An example is Drake Landing Solar Community in Canada, for which 97% of the year-round heat is provided by solar-thermal collectors on garage roofs, enabled by a borehole thermal energy store (BTES). In Braedstrup, Denmark, the community's solar district heating system also uses STES, at a storage temperature of 65 °C (149 °F). A heat pump, which runs only while surplus wind power is available, is used to raise the temperature to 80 °C (176 °F) for distribution; when wind energy is not available, a gas-fired boiler is used instead. Twenty percent of Braedstrup's heat is solar.
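Sensible heat scales as Q = m·c·ΔT. A sketch for a water store; the volume and temperature swing below are illustrative assumptions, not the figures of any project named above:

```python
# Sensible heat stored in water: Q = m * c * dT.
C_WATER = 4186.0  # specific heat of water, J/(kg K)

def water_heat_kwh(volume_m3, t_hot_c, t_cold_c):
    mass_kg = volume_m3 * 1000.0
    return mass_kg * C_WATER * (t_hot_c - t_cold_c) / 3.6e6

# A hypothetical 200 m^3 pit store cycled between 80 C and 30 C:
q = water_heat_kwh(200, 80, 30)
print(round(q))  # -> 11628 kWh, about 11.6 MWh
```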
==== Latent heat thermal (LHTES) ====
Latent heat thermal energy storage systems work by transferring heat to or from a material to change its phase. A phase change is melting, solidifying, vaporizing or condensing. Such a material is called a phase change material (PCM). Materials used in LHTESs often have a high latent heat so that at their specific temperature, the phase change absorbs a large amount of energy, much more than sensible heat.
A steam accumulator is a type of LHTES where the phase change is between liquid and gas and uses the latent heat of vaporization of water. Ice storage air conditioning systems use off-peak electricity to store cold by freezing water into ice. The cold stored in the ice is released as it melts and can be used for cooling at peak hours.
==== Cryogenic thermal energy storage ====
Air can be liquefied by cooling using electricity and stored as a cryogen with existing technologies. The liquid air can then be expanded through a turbine and the energy recovered as electricity. The system was demonstrated at a pilot plant in the UK in 2012.
In 2019, Highview announced plans to build a 50 MW plant in the North of England and another in northern Vermont, with each proposed facility able to store five to eight hours of energy, for a 250–400 MWh storage capacity.
==== Carnot battery ====
Electrical energy can be stored thermally by resistive heating or heat pumps, and the stored heat can be converted back to electricity via Rankine cycle or Brayton cycle. This technology has been studied to retrofit coal-fired power plants into fossil-fuel free generation systems. Coal-fired boilers are replaced by high-temperature heat storage charged by excess electricity from renewable energy sources. In 2020, German Aerospace Center started to construct the world's first large-scale Carnot battery system, which has 1,000 MWh storage capacity.
=== Electrochemical ===
==== Rechargeable battery ====
A rechargeable battery comprises one or more electrochemical cells. It is known as a 'secondary cell' because its electrochemical reactions are electrically reversible. Rechargeable batteries come in many shapes and sizes, ranging from button cells to megawatt grid systems.
Rechargeable batteries have lower total cost of use and environmental impact than non-rechargeable (disposable) batteries. Some rechargeable battery types are available in the same form factors as disposables. Rechargeable batteries have higher initial cost but can be recharged very cheaply and used many times.
Common rechargeable battery chemistries include:
Lead–acid battery: Lead acid batteries hold the largest market share of electric storage products. A single cell produces about 2V when charged. In the charged state the metallic lead negative electrode and the lead sulfate positive electrode are immersed in a dilute sulfuric acid (H2SO4) electrolyte. In the discharge process electrons are pushed out of the cell as lead sulfate is formed at the negative electrode while the electrolyte is reduced to water.
Lead–acid battery technology has been developed extensively; upkeep requires minimal labor and its cost is low. However, the battery suffers from quick discharge, resulting in a low life span and low energy density.
Nickel–cadmium battery (NiCd): Uses nickel oxide hydroxide and metallic cadmium as electrodes. Cadmium is a toxic element, and was banned for most uses by the European Union in 2004. Nickel–cadmium batteries have been almost completely replaced by nickel–metal hydride (NiMH) batteries.
Nickel–metal hydride battery (NiMH): First commercial types were available in 1989. These are now a common consumer and industrial type. The battery has a hydrogen-absorbing alloy for the negative electrode instead of cadmium.
Lithium-ion battery: The battery of choice in many consumer electronics, with one of the best energy-to-mass ratios and a very slow self-discharge when not in use.
Lithium-ion polymer battery: These batteries are light in weight and can be made in any shape desired.
Aluminium–sulfur battery with rock salt crystals as electrolyte: aluminium and sulfur are Earth-abundant materials and are much cheaper than lithium.
===== Flow battery =====
A flow battery works by passing a solution over a membrane where ions are exchanged to charge or discharge the cell. Cell voltage is chemically determined by the Nernst equation and ranges, in practical applications, from 1.0 V to 2.2 V. Storage capacity depends on the volume of solution. A flow battery is technically akin both to a fuel cell and an electrochemical accumulator cell. Commercial applications are for long half-cycle storage such as backup grid power.
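The Nernst equation referred to above, E = E° − (RT/nF)·ln Q, can be evaluated directly. The standard potential and reaction quotient in this sketch are illustrative, not those of any particular flow chemistry:

```python
# Cell voltage from the Nernst equation: E = E0 - (R*T)/(n*F) * ln(Q).
# E0 and Q below are illustrative placeholders, not a specific chemistry.
import math

R = 8.314    # gas constant, J/(mol K)
F = 96485.0  # Faraday constant, C/mol

def nernst_voltage(e0, n, q, temp_k=298.15):
    return e0 - (R * temp_k) / (n * F) * math.log(q)

# At Q = 1 the cell sits at its standard potential:
print(round(nernst_voltage(1.4, 1, 1.0), 3))    # -> 1.4
# As discharge shifts the reaction quotient to 100, the voltage sags:
print(round(nernst_voltage(1.4, 1, 100.0), 3))  # -> 1.282
```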
==== Supercapacitor ====
Supercapacitors, also called electric double-layer capacitors (EDLC) or ultracapacitors, are a family of electrochemical capacitors that do not have conventional solid dielectrics. Capacitance is determined by two storage principles, double-layer capacitance and pseudocapacitance.
Supercapacitors bridge the gap between conventional capacitors and rechargeable batteries. They store the most energy per unit volume or mass (energy density) among capacitors. They support up to 10,000 farads at 1.2 volts, up to 10,000 times the capacitance of electrolytic capacitors, but deliver or accept less than half as much power per unit time (power density).
While supercapacitors have specific energy and energy densities that are approximately 10% of batteries, their power density is generally 10 to 100 times greater. This results in much shorter charge/discharge cycles. Also, they tolerate many more charge-discharge cycles than batteries.
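Capacitor energy is E = ½·C·V², which makes the trade-off concrete: even the 10,000 F / 1.2 V figure quoted above amounts to only a couple of watt-hours.

```python
# Energy in a capacitor: E = 0.5 * C * V^2.
def cap_energy_j(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v ** 2

e = cap_energy_j(10_000, 1.2)                # the figure quoted above
print(round(e, 3), round(e / 3600, 3))       # -> 7200.0 J, 2.0 W h
```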
Supercapacitors have many applications, including:
Low supply current for memory backup in static random-access memory (SRAM)
Power for cars, buses, trains, cranes and elevators, including energy recovery from braking, short-term energy storage and burst-mode power delivery
=== Chemical ===
==== Power-to-gas ====
Power-to-gas is the conversion of electricity to a gaseous fuel such as hydrogen or methane. The three commercial methods use electricity to split water into hydrogen and oxygen by means of electrolysis.
In the first method, hydrogen is injected into the natural gas grid or is used for transportation. The second method is to combine the hydrogen with carbon dioxide to produce methane using a methanation reaction such as the Sabatier reaction, or biological methanation, resulting in an extra energy conversion loss of 8%. The methane may then be fed into the natural gas grid. In the third method, the output gas of a wood gas generator or a biogas plant, after passing through the biogas upgrader, is mixed with hydrogen from the electrolyzer to upgrade the quality of the biogas.
===== Hydrogen =====
The element hydrogen can be a form of stored energy. Hydrogen can produce electricity via a hydrogen fuel cell.
At penetrations below 20% of grid demand, renewables do not severely change the economics; beyond about 20% of total demand, external storage becomes important. If these sources are used to make hydrogen, their capacity can be expanded freely. A 5-year community-based pilot program using wind turbines and hydrogen generators began in 2007 in the remote community of Ramea, Newfoundland and Labrador. A similar project began in 2004 on Utsira, a small Norwegian island.
Energy losses involved in the hydrogen storage cycle come from the electrolysis of water, liquefaction or compression of the hydrogen, and conversion back to electricity.
Hydrogen can also be produced from aluminum and water by stripping aluminum's naturally occurring aluminum oxide barrier and introducing it to water. This method is beneficial because recycled aluminum cans can be used to generate hydrogen; however, systems to harness this option have not been commercially developed and are much more complex than electrolysis systems. Common methods to strip the oxide layer include caustic catalysts such as sodium hydroxide and alloys with gallium, mercury and other metals.
Underground hydrogen storage is the practice of hydrogen storage in caverns, salt domes and depleted oil and gas fields. Large quantities of gaseous hydrogen have been stored in caverns by Imperial Chemical Industries for many years without any difficulties. The European Hyunder project indicated in 2013 that storage of wind and solar energy using underground hydrogen would require 85 caverns.
Powerpaste is a magnesium- and hydrogen-based fluid gel that releases hydrogen when reacting with water. It was invented, patented and is being developed by the Fraunhofer Institute for Manufacturing Technology and Advanced Materials (IFAM) of the Fraunhofer-Gesellschaft. Powerpaste is made by combining magnesium powder with hydrogen to form magnesium hydride in a process conducted at 350 °C and five to six times atmospheric pressure. An ester and a metal salt are then added to make the finished product. Fraunhofer states that they are building a production plant slated to start production in 2021, which will produce 4 tons of Powerpaste annually. Fraunhofer has patented their invention in the United States and EU. Fraunhofer claims that Powerpaste is able to store hydrogen energy at 10 times the energy density of a lithium battery of a similar dimension and is safe and convenient for automotive situations.
===== Methane =====
Methane is the simplest hydrocarbon with the molecular formula CH4. Methane is more easily stored and transported than hydrogen. Storage and combustion infrastructure (pipelines, gasometers, power plants) are mature.
Synthetic natural gas (syngas or SNG) can be created in a multi-step process, starting with hydrogen and oxygen. Hydrogen is then reacted with carbon dioxide in a Sabatier process, producing methane and water. Methane can be stored and later used to produce electricity. The resulting water is recycled, reducing the need for water. In the electrolysis stage, oxygen is stored for methane combustion in a pure oxygen environment at an adjacent power plant, eliminating nitrogen oxides.
Methane combustion produces carbon dioxide (CO2) and water. The carbon dioxide can be recycled to boost the Sabatier process and water can be recycled for further electrolysis. Methane production, storage and combustion recycles the reaction products.
The CO2 has economic value as a component of an energy storage vector, not a cost as in carbon capture and storage.
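The closed loop described above follows from the stoichiometry of the Sabatier reaction, CO2 + 4 H2 -> CH4 + 2 H2O. A mass-balance sketch using rounded molar masses, for illustration only:

```python
# Mass balance of the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
# Molar masses in g/mol, rounded.
M = {"CO2": 44.01, "H2": 2.016, "CH4": 16.04, "H2O": 18.015}

def sabatier_masses(ch4_kg):
    """Reactant and by-product masses (kg) per ch4_kg of methane produced."""
    mol = ch4_kg * 1000 / M["CH4"]  # moles of CH4
    return {
        "CO2_kg": mol * M["CO2"] / 1000,
        "H2_kg": mol * 4 * M["H2"] / 1000,
        "H2O_kg": mol * 2 * M["H2O"] / 1000,
    }

# Producing 1 kg of methane consumes ~2.74 kg CO2 and ~0.50 kg H2,
# returning ~2.25 kg of water for re-electrolysis:
masses = sabatier_masses(1.0)
print({k: round(v, 2) for k, v in masses.items()})
```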
==== Power-to-liquid ====
Power-to-liquid is similar to power to gas except that the hydrogen is converted into liquids such as methanol or ammonia. These are easier to handle than gases, and require fewer safety precautions than hydrogen. They can be used for transportation, including aircraft, but also for industrial purposes or in the power sector.
==== Biofuels ====
Various biofuels such as biodiesel, vegetable oil, alcohol fuels, or biomass can replace fossil fuels. Various chemical processes can convert the carbon and hydrogen in coal, natural gas, plant and animal biomass and organic wastes into short hydrocarbons suitable as replacements for existing hydrocarbon fuels. Examples are Fischer–Tropsch diesel, methanol, dimethyl ether and syngas. This diesel source was used extensively in World War II in Germany, which faced limited access to crude oil supplies. South Africa produces most of the country's diesel from coal for similar reasons. A long term oil price above US$35/bbl may make such large scale synthetic liquid fuels economical.
===== Aluminum =====
Aluminum has been proposed as an energy store by a number of researchers. Its electrochemical equivalent (8.04 Ah/cm3) is nearly four times greater than that of lithium (2.06 Ah/cm3). Energy can be extracted from aluminum by reacting it with water to generate hydrogen. However, it must first be stripped of its natural oxide layer, a process which requires pulverization, chemical reactions with caustic substances, or alloys. The byproduct of the reaction to create hydrogen is aluminum oxide, which can be recycled into aluminum with the Hall–Héroult process, making the reaction theoretically renewable. If the Hall–Héroult process is run using solar or wind power, aluminum could be used to store the energy produced at higher efficiency than direct solar electrolysis.
==== Boron, silicon, and zinc ====
Boron, silicon, and zinc have been proposed as energy storage solutions.
==== Other chemical ====
The organic compound norbornadiene converts to quadricyclane upon exposure to light, storing solar energy as the energy of chemical bonds. A working system has been developed in Sweden as a molecular solar thermal system.
=== Electrical methods ===
==== Capacitor ====
A capacitor (originally known as a 'condenser') is a passive two-terminal electrical component used to store energy electrostatically. Practical capacitors vary widely, but all contain at least two electrical conductors (plates) separated by a dielectric (i.e., insulator). A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.) Conventional capacitors provide less than 360 joules per kilogram, while a conventional alkaline battery has a density of 590 kJ/kg.
Capacitors store energy in an electrostatic field between their plates. Given a potential difference across the conductors (e.g., when a capacitor is attached across a battery), an electric field develops across the dielectric, causing positive charge (+Q) to collect on one plate and negative charge (-Q) to collect on the other plate. If a battery is attached to a capacitor for a sufficient amount of time, no current can flow through the capacitor. However, if a time-varying or alternating voltage is applied across the leads of the capacitor, a displacement current can flow. Besides capacitor plates, charge can also be stored in a dielectric layer.
Capacitance is greater given a narrower separation between conductors and when the conductors have a larger surface area. In practice, the dielectric between the plates emits a small amount of leakage current and has an electric field strength limit, known as the breakdown voltage. However, the effect of recovery of a dielectric after a high-voltage breakdown holds promise for a new generation of self-healing capacitors. The conductors and leads introduce undesired inductance and resistance.
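These geometric dependences are captured by the parallel-plate formula C = ε0·εr·A/d. A sketch with assumed plate dimensions and dielectric, showing why large area and thin, high-permittivity dielectrics raise capacitance:

```python
# Parallel-plate capacitance: C = eps0 * eps_r * A / d.
# Plate area, gap, and permittivity below are illustrative assumptions.
EPS0 = 8.854e-12  # permittivity of free space, F/m

def plate_capacitance_f(area_m2, gap_m, eps_r=1.0):
    return EPS0 * eps_r * area_m2 / gap_m

# 1 m^2 plates 10 micrometres apart with a relative permittivity of 1000:
c = plate_capacitance_f(1.0, 10e-6, 1000)
print(f"{c:.2e} F")  # -> 8.85e-04 F
```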
Research is assessing the quantum effects of nanoscale capacitors for digital quantum batteries.
==== Superconducting magnetics ====
Superconducting magnetic energy storage (SMES) systems store energy in a magnetic field created by the flow of direct current in a superconducting coil that has been cooled to a temperature below its superconducting critical temperature. A typical SMES system includes a superconducting coil, power conditioning system and refrigerator. Once the superconducting coil is charged, the current does not decay and the magnetic energy can be stored indefinitely.
The stored energy can be released to the network by discharging the coil. The associated inverter/rectifier accounts for about 2–3% energy loss in each direction. SMES loses the least amount of electricity in the energy storage process compared to other methods of storing energy. SMES systems offer round-trip efficiency greater than 95%.
Due to the energy requirements of refrigeration and the cost of superconducting wire, SMES is used for short duration storage such as improving power quality. It also has applications in grid balancing.
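The energy held in the coil is the magnetic energy E = ½·L·I². A sketch with assumed coil parameters (not those of a real SMES installation):

```python
# Energy stored in a superconducting coil: E = 0.5 * L * I^2.
# Inductance and current below are assumptions for illustration.
def smes_energy_mj(inductance_h, current_a):
    return 0.5 * inductance_h * current_a ** 2 / 1e6

# A hypothetical 10 H coil carrying 1,000 A:
print(smes_energy_mj(10, 1000))  # -> 5.0 MJ, about 1.4 kWh
```

The modest energy but near-instant response is why SMES suits power quality and grid balancing rather than bulk storage.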
== Applications ==
=== Mills ===
The classic application before the Industrial Revolution was the control of waterways to drive water mills for processing grain or powering machinery. Complex systems of reservoirs and dams were constructed to store and release water (and the potential energy it contained) when required.
=== Homes ===
Home energy storage is expected to become increasingly common given the growing importance of distributed generation of renewable energies (especially photovoltaics) and the important share of energy consumption in buildings. To exceed a self-sufficiency of 40% in a household equipped with photovoltaics, energy storage is needed. Multiple manufacturers produce rechargeable battery systems for storing energy, generally to hold surplus energy from home solar or wind generation. Today, for home energy storage, Li-ion batteries are preferable to lead-acid ones given their similar cost but much better performance.
Tesla Motors produces two models of the Tesla Powerwall. One is a 10 kWh weekly cycle version for backup applications and the other is a 7 kWh version for daily cycle applications. In 2016, a limited version of the Tesla Powerpack 2 cost US$398/kWh to store electricity worth 12.5 cents/kWh (US average grid price), making a positive return on investment doubtful unless electricity prices are higher than 30 cents/kWh.
RoseWater Energy produces two models of the "Energy & Storage System", the HUB 120 and SB20. Both versions provide 28.8 kWh of storage, enabling them to run larger houses or light commercial premises and to protect custom installations. The system combines five key elements: a clean 60 Hz sine wave, zero transfer time, industrial-grade surge protection, optional renewable energy grid sell-back, and battery backup.
Enphase Energy announced an integrated system that allows home users to store, monitor and manage electricity. The system stores 1.2 kWh of energy and delivers 275 W (500 W peak) of power output.
Storing wind or solar energy using thermal energy storage, though less flexible, is considerably cheaper than batteries. A simple 52-gallon electric water heater can store roughly 12 kWh of energy for supplementing hot water or space heating.
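The 12 kWh figure can be checked with Q = m·c·ΔT; the inlet and setpoint temperatures below are assumptions chosen to illustrate the arithmetic:

```python
# Checking the 52-gallon water-heater figure with Q = m * c * dT.
C_WATER = 4186.0   # specific heat of water, J/(kg K)
GAL_TO_KG = 3.785  # 1 US gallon of water is about 3.785 kg

def tank_storage_kwh(gallons, dt_c):
    return gallons * GAL_TO_KG * C_WATER * dt_c / 3.6e6

# Heating 52 gallons through ~53 C (e.g. a 12 C inlet to a 65 C setpoint):
kwh = tank_storage_kwh(52, 53)
print(round(kwh, 1))  # -> 12.1 kWh, consistent with the text
```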
For purely financial purposes in areas where net metering is available, home generated electricity may be sold to the grid through a grid-tie inverter without the use of batteries for storage.
=== Grid electricity and power stations ===
==== Renewable energy ====
The largest source and the greatest store of renewable energy is provided by hydroelectric dams. A large reservoir behind a dam can store enough water to average the annual flow of a river between dry and wet seasons, and a very large reservoir can store enough water to average the flow of a river between dry and wet years. While a hydroelectric dam does not directly store energy from intermittent sources, it does balance the grid by lowering its output and retaining its water when power is generated by solar or wind. If wind or solar generation exceeds the region's hydroelectric capacity, then some additional source of energy is needed.
Many renewable energy sources (notably solar and wind) produce variable power. Storage systems can level out the imbalances between supply and demand that this causes. Electricity must be used as it is generated or converted immediately into storable forms.
The main method of electrical grid storage is pumped-storage hydroelectricity. Areas of the world such as Norway, Wales, Japan and the US have used elevated geographic features for reservoirs, using electrically powered pumps to fill them. When needed, the water passes through generators and converts the gravitational potential of the falling water into electricity. Pumped storage in Norway, which gets almost all its electricity from hydro, currently has a capacity of 1.4 GW, but since the total installed capacity is nearly 32 GW and 75% of that is regulable, it can be expanded significantly.
Forms of storage that produce electricity include pumped-storage hydroelectric dams, rechargeable batteries, thermal storage (including molten salts, which can efficiently store and release very large quantities of heat energy), compressed air energy storage, flywheels, cryogenic systems and superconducting magnetic coils.
Surplus power can also be converted into methane (Sabatier process) and stored in the natural gas network.
In 2011, the Bonneville Power Administration in the northwestern United States created an experimental program to absorb excess wind and hydro power generated at night or during stormy periods that are accompanied by high winds. Under central control, home appliances absorb surplus energy by heating ceramic bricks in special space heaters to hundreds of degrees and by boosting the temperature of modified hot water heater tanks. After charging, the appliances provide home heating and hot water as needed. The experimental system was created as a result of a severe 2010 storm that overproduced renewable energy to the extent that all conventional power sources were shut down, or in the case of a nuclear power plant, reduced to its lowest possible operating level, leaving a large area running almost completely on renewable energy.
Another advanced method used at the former Solar Two project in the United States and the Solar Tres Power Tower in Spain uses molten salt to store thermal energy captured from the sun and then convert it and dispatch it as electrical power. The system pumps molten salt through a tower or other special conduits to be heated by the sun. Insulated tanks store the solution. Electricity is produced by turning water to steam that is fed to turbines.
Since the early 21st century, batteries have been applied to utility-scale load-leveling and frequency regulation.
In vehicle-to-grid storage, electric vehicles that are plugged into the energy grid can deliver stored electrical energy from their batteries into the grid when needed.
=== Air conditioning ===
Thermal energy storage (TES) can be used for air conditioning. It is most widely used for cooling single large buildings and/or groups of smaller buildings. Commercial air conditioning systems are the biggest contributors to peak electrical loads. In 2009, thermal storage was used in over 3,300 buildings in over 35 countries. It works by chilling material at night and using the chilled material for cooling during the hotter daytime periods.
The most popular technique is ice storage, which requires less space than water and is cheaper than fuel cells or flywheels. In this application, a standard chiller runs at night to produce an ice pile. Water circulates through the pile during the day to chill water that would normally be the chiller's daytime output.
A partial storage system minimizes capital investment by running the chillers nearly 24 hours a day. At night, they produce ice for storage and during the day they chill water. Water circulating through the melting ice augments the production of chilled water. Such a system makes ice for 16 to 18 hours a day and melts ice for six hours a day. Capital expenditures are reduced because the chillers can be just 40% – 50% of the size needed for a conventional, no-storage design. Storage sufficient to store half a day's available heat is usually adequate.
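The chiller-downsizing arithmetic behind the "40% – 50%" figure can be sketched as follows. The daily cooling load and hour counts below are made-up illustrative inputs, not data from the source.

```python
# Partial-storage sizing sketch: a chiller that runs nearly round the clock
# can be much smaller than one sized to meet the daytime peak alone.
daily_load_ton_hours = 1000.0   # assumed daily cooling load
peak_period_hours = 10.0        # assumed occupied period carrying the load
run_hours_partial = 22.0        # partial storage: chiller runs ~22 h/day

conventional_chiller = daily_load_ton_hours / peak_period_hours    # 100 tons
partial_storage_chiller = daily_load_ton_hours / run_hours_partial # ~45 tons

ratio = partial_storage_chiller / conventional_chiller
print(round(ratio, 2))   # ~0.45, i.e. within the 40%-50% range quoted above
```

Spreading the same daily load over more running hours is what lets the partial-storage chiller be roughly half the conventional size.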
A full storage system shuts off the chillers during peak load hours. Capital costs are higher, as such a system requires larger chillers and a larger ice storage system.
This ice is produced when electrical utility rates are lower. Off-peak cooling systems can lower energy costs. The U.S. Green Building Council has developed the Leadership in Energy and Environmental Design (LEED) program to encourage the design of reduced-environmental impact buildings. Off-peak cooling may help toward LEED Certification.
Thermal storage for heating is less common than for cooling. An example of thermal storage is storing solar heat to be used for heating at night.
Latent heat can also be stored in technical phase change materials (PCMs). These can be encapsulated in wall and ceiling panels, to moderate room temperatures.
=== Transport ===
Liquid hydrocarbon fuels are the most commonly used forms of energy storage for use in transportation, followed by a growing use of battery electric vehicles and hybrid electric vehicles. Other energy carriers such as hydrogen can be used to avoid producing greenhouse gases.
Public transport systems like trams and trolleybuses require electricity, but due to their variability in movement, a steady supply of electricity via renewable energy is challenging. Photovoltaic systems installed on the roofs of buildings can be used to power public transportation systems during periods in which there is increased demand for electricity and access to other forms of energy is not readily available. Upcoming transitions in the transportation system also include ferries and airplanes, for which electric power supply is being investigated as an alternative.
=== Electronics ===
Capacitors are widely used in electronic circuits for blocking direct current while allowing alternating current to pass. In analog filter networks, they smooth the output of power supplies. In resonant circuits they tune radios to particular frequencies. In electric power transmission systems they stabilize voltage and power flow.
== Use cases ==
The United States Department of Energy International Energy Storage Database (IESDB) is a free-access database of energy storage projects and policies funded by the United States Department of Energy Office of Electricity and Sandia National Labs.
== Capacity ==
Storage capacity is the amount of energy that can be extracted from an energy storage device or system. It is usually measured in joules or kilowatt-hours and their multiples, and may be given as the number of hours of electricity production at a power plant's nameplate capacity; when the storage is of the primary type (i.e., thermal or pumped-water), output is sourced only from the power plant's embedded storage system.
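The two ways of stating capacity relate by a simple division: hours at nameplate equals stored energy divided by nameplate power. The fleet sizes below are illustrative assumptions, not figures for any specific plant or grid.

```python
# Storage capacity expressed as hours of production at nameplate power.
capacity_kwh = 9.6e9    # assumed storage fleet: 9.6 TWh, expressed in kWh
nameplate_kw = 4.0e8    # assumed generating fleet: 400 GW, expressed in kW

hours_at_nameplate = capacity_kwh / nameplate_kw   # 24.0 hours
capacity_joules = capacity_kwh * 3.6e6             # 1 kWh = 3.6e6 J

print(hours_at_nameplate)   # 24.0
```

The same stored energy therefore reads as "9.6 TWh" or "24 hours at nameplate", depending on convention.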
== Economics ==
The economics of energy storage depend strongly on the reserve service requested, and several uncertainty factors affect the profitability of energy storage. Therefore, not every storage method is technically and economically suitable for the storage of several MWh, and the optimal size of the energy storage is market and location dependent.
Moreover, ESS are affected by several risks, e.g.:
Techno-economic risks, which are related to the specific technology;
Market risks, which are the factors that affect the electricity supply system;
Regulation and policy risks.
Therefore, traditional techniques based on deterministic discounted cash flow (DCF) analysis for investment appraisal are not fully adequate to evaluate these risks and uncertainties and the investor's flexibility to deal with them. Hence, the literature recommends assessing the value of risks and uncertainties through real option analysis (ROA), which is a valuable method in uncertain contexts.
The economic valuation of large-scale applications (including pumped hydro storage and compressed air) considers benefits including: curtailment avoidance, grid congestion avoidance, price arbitrage and carbon-free energy delivery. In one technical assessment by the Carnegie Mellon Electricity Industry Center, economic goals could be met using batteries if their capital cost were $30 to $50 per kilowatt-hour.
A metric of energy efficiency of storage is energy storage on energy invested (ESOI), which is the amount of energy that can be stored by a technology, divided by the amount of energy required to build that technology. The higher the ESOI, the better the storage technology is energetically. For lithium-ion batteries this is around 10, and for lead acid batteries it is about 2. Other forms of storage such as pumped hydroelectric storage generally have higher ESOI, such as 210.
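The ESOI metric is simply the energy a device stores over its lifetime divided by the energy embodied in building it. The function below sketches that definition; the cycle lives and embodied energies are rough illustrative inputs chosen so the ratios land near the figures quoted above, not measured data.

```python
# ESOI = (energy stored over the device's lifetime) / (energy to build it).
def esoi(cycle_life, capacity_kwh, depth_of_discharge, embodied_energy_kwh):
    """Energy stored on energy invested (dimensionless; higher is better)."""
    lifetime_energy = cycle_life * capacity_kwh * depth_of_discharge
    return lifetime_energy / embodied_energy_kwh

# Illustrative inputs (assumed values, per kWh of rated capacity):
li_ion = esoi(cycle_life=6000, capacity_kwh=1.0,
              depth_of_discharge=0.8, embodied_energy_kwh=480)   # -> 10.0
lead_acid = esoi(cycle_life=700, capacity_kwh=1.0,
                 depth_of_discharge=0.8, embodied_energy_kwh=280)  # -> 2.0

print(li_ion, lead_acid)
```

Pumped hydro's much higher ESOI follows from the same ratio: its reservoirs cycle for many decades against a comparatively small embodied-energy cost.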
Pumped-storage hydroelectricity is by far the largest storage technology used globally. However, the usage of conventional pumped-hydro storage is limited because it requires terrain with elevation differences and also has a very high land use for relatively little power. In locations without suitable natural geography, underground pumped-hydro storage could also be used. High costs and limited lifetimes still make batteries a "weak substitute" for dispatchable power sources, unable to cover variable renewable power gaps lasting for days, weeks or months. In grid models with a high VRE share, the cost of storage tends to dominate the cost of the whole grid; for example, in California alone an 80% share of VRE would require 9.6 TWh of storage, but 100% would require 36.3 TWh. As of 2018, the state had only 150 GWh of storage, primarily in pumped storage and a small fraction in batteries. According to another study, supplying 80% of US demand from VRE would require a smart grid covering the whole country or battery storage capable of supplying the whole system for 12 hours, either option at a cost estimated at $2.5 trillion. Similarly, several studies have found that relying only on VRE and energy storage would cost about 30–50% more than a comparable system that combines VRE with nuclear plants or plants with carbon capture and storage instead of energy storage.
== Research ==
=== Germany ===
In 2013, the German government allocated €200M (approximately US$270M) for research, and another €50M to subsidize battery storage in residential rooftop solar panels, according to a representative of the German Energy Storage Association.
Siemens AG commissioned a production-research plant to open in 2015 at the Zentrum für Sonnenenergie und Wasserstoff (ZSW, the German Center for Solar Energy and Hydrogen Research in the State of Baden-Württemberg), a university/industry collaboration in Stuttgart, Ulm and Widderstall, staffed by approximately 350 scientists, researchers, engineers, and technicians. The plant develops new near-production manufacturing materials and processes (NPMM&P) using a computerized Supervisory Control and Data Acquisition (SCADA) system. It aims to enable the expansion of rechargeable battery production with increased quality and lower cost.
From 2023 onwards, a new project by the German Research Foundation focuses on molecular photoswitches to store solar thermal energy. The spokesperson of these so-called molecular solar thermal (MOST) systems is Prof. Dr. Hermann A. Wegner.
=== United States ===
In 2014, research and test centers opened to evaluate energy storage technologies. Among them was the Advanced Systems Test Laboratory at the University of Wisconsin–Madison, which partnered with battery manufacturer Johnson Controls. The laboratory was created as part of the university's newly opened Wisconsin Energy Institute. Their goals include the evaluation of state-of-the-art and next generation electric vehicle batteries, including their use as grid supplements.
The State of New York unveiled its New York Battery and Energy Storage Technology (NY-BEST) Test and Commercialization Center at Eastman Business Park in Rochester, New York, at a cost of $23 million for its almost 1,700 m2 laboratory. The center includes the Center for Future Energy Systems, a collaboration between Cornell University in Ithaca, New York, and the Rensselaer Polytechnic Institute in Troy, New York. NY-BEST tests, validates and independently certifies diverse forms of energy storage intended for commercial use.
On September 27, 2017, Senators Al Franken of Minnesota and Martin Heinrich of New Mexico introduced the Advancing Grid Storage Act (AGSA), which would devote more than $1 billion to research, technical assistance and grants to encourage energy storage in the United States.
=== United Kingdom ===
In the United Kingdom, some 14 industry and government agencies allied with seven British universities in May 2014 to create the SUPERGEN Energy Storage Hub in order to assist in the coordination of energy storage technology research and development.
== See also ==
== References ==
== Further reading ==
Journals and papers
Chen, Haisheng; Thang Ngoc Cong; Wei Yang; Chunqing Tan; Yongliang Li; Yulong Ding. Progress in electrical energy storage system: A critical review, Progress in Natural Science, accepted July 2, 2008, published in Vol. 19, 2009, pp. 291–312, doi: 10.1016/j.pnsc.2008.07.014. Sourced from the National Natural Science Foundation of China and the Chinese Academy of Sciences. Published by Elsevier and Science in China Press. Synopsis: a review of electrical energy storage technologies for stationary applications. Retrieved from ac.els-cdn.com on May 13, 2014. (PDF)
Corum, Lyn. The New Core Technology: Energy storage is part of the smart grid evolution, The Journal of Energy Efficiency and Reliability, December 31, 2009. Discusses: Anaheim Public Utilities Department, lithium ion energy storage, iCel Systems, Beacon Power, Electric Power Research Institute (EPRI), ICEL, Self Generation Incentive Program, ICE Energy, vanadium redox flow, lithium Ion, regenerative fuel cell, ZBB, VRB, lead acid, CAES, and Thermal Energy Storage. (PDF)
de Oliveira e Silva, G.; Hendrick, P. (2016). "Lead-acid batteries coupled with photovoltaics for increased electricity self-sufficiency in households". Applied Energy. 178: 856–867. Bibcode:2016ApEn..178..856D. doi:10.1016/j.apenergy.2016.06.003.
Sahoo, Subrat; Timmann, Pascal (2023). "Energy Storage Technologies for Modern Power Systems: A Detailed Analysis of Functionalities, Potentials, and Impacts" (PDF). IEEE Access. 11: 49689–49729. Bibcode:2023IEEEA..1149689S. doi:10.1109/ACCESS.2023.3274504. ISSN 2169-3536. Retrieved December 14, 2024.
Whittingham, M. Stanley. History, Evolution, and Future Status of Energy Storage, Proceedings of the IEEE, manuscript accepted February 20, 2012, date of publication April 16, 2012; date of current version May 10, 2012, published in Proceedings of the IEEE, Vol. 100, May 13, 2012, 0018–9219, pp. 1518–1534, doi: 10.1109/JPROC.2012.219017. Retrieved from ieeexplore.ieee.org May 13, 2014. Synopsis: A discussion of the important aspects of energy storage including emerging battery technologies and the importance of storage systems in key application areas, including electronic devices, transportation, and the utility grid. (PDF)
Books
GA Mansoori, N Enayati, LB Agyarko (2016), Energy: Sources, Utilization, Legislation, Sustainability, Illinois as Model State, World Sci. Pub. Co., ISBN 978-981-4704-00-7
Díaz-González, Franscisco (2016). Energy storage in power systems. United Kingdom: John Wiley & Sons. ISBN 9781118971321.
== External links ==
U.S. Dept of Energy – Energy Storage Systems Government research center on energy storage technology.
U.S. Dept of Energy – International Energy Storage Database Archived November 13, 2013, at the Wayback Machine The DOE International Energy Storage Database provides free, up-to-date information on grid-connected energy storage projects and relevant state and federal policies.
IEEE Special Issue on Massive Energy Storage
IEA-ECES – International Energy Agency – Energy Conservation through Energy Storage programme.
Energy Information Administration Glossary
Energy Storage Project Regeneration. | Wikipedia/Energy_storage_system |
Science Buddies, formerly the Kenneth Lafferty Hess Family Charitable Foundation, is a non-profit organization that provides a website of free science project productivity tools and mentoring to support K-12 students, especially for science fairs. Founded in 2001 by engineer and high-tech businessman Kenneth Hess, Science Buddies features STEM content and services to assist students and educators. Since its founding, it has expanded its original mission to provide teacher resources targeted for classroom and science fair use.
== Philosophy ==
Science Buddies' mission is to help students build their literacy in science and technology so they can become productive and engaged citizens in the 21st century.
The site has personalized learning tools, over 15,000 pages of scientist-developed subject matter (including experiments based on the latest academic research), and an online community of science professionals who volunteer to advise students.
Science Buddies also provides resources to support parents and teachers as they guide students seeking out and performing science projects. They attempt to provide a bridge between scientists, engineers, educators, and students, giving students access to current scientific research and simultaneously giving scientists a way to reach out to young people interested in their fields.
== About Science Buddies ==
Noticing how much fun his teenage daughter had participating in science fairs, but dismayed to discover a shortage of quality science fair help online, Ken Hess thought science fair "productivity tools" and mentoring would allow many more students to participate in science fairs and develop inspirational relationships with science role models. Over time, such a program would help students improve their science skills and literacy while inspiring them to consider careers in science and engineering. So, in early 2001, Ken Hess started a charity with a mission of developing online tools and support for students doing science fair projects.
In collaboration with high tech companies, government labs and agencies (like NOAA and NASA), universities, and other science education resources, Science Buddies offers scientist-authored tools, tips, and techniques. Doug Osheroff (Nobel Prize-winning physicist) and Bernard Harris (retired NASA astronaut) both serve on the Science Buddies scientific advisory board.
Science Buddies is a website recommended by educational organizations such as the ALA and the SciLinks program of the National Science Teachers Association (NSTA). All resources and tools on the Science Buddies website are available free to students and teachers. Science Buddies uses an underwriting model of sponsorship (similar to PBS television) by displaying sponsor information.
== References ==
== External links ==
Science Buddies Stories at Scientific American
Science Buddies: Advancing Informal Science Education, Science 29 April 2011, Vol. 332 no. 6029 | Wikipedia/Science_Buddies |
Energy development is the field of activities focused on obtaining sources of energy from natural resources. These activities include the production of renewable, nuclear, and fossil fuel derived sources of energy, and the recovery and reuse of energy that would otherwise be wasted. Energy conservation and efficiency measures reduce the demand for energy development and can benefit society by mitigating environmental issues.
Societies use energy for transportation, manufacturing, illumination, heating and air conditioning, and communication, for industrial, commercial, agricultural and domestic purposes. Energy resources may be classified as primary resources, where the resource can be used in substantially its original form, or as secondary resources, where the energy source must be converted into a more conveniently usable form. Non-renewable resources are significantly depleted by human use, whereas renewable resources are produced by ongoing processes that can sustain indefinite human exploitation.
Thousands of people are employed in the energy industry. The conventional industry comprises the petroleum industry, the natural gas industry, the electrical power industry, and the nuclear industry. New energy industries include the renewable energy industry, comprising alternative and sustainable manufacture, distribution, and sale of alternative fuels.
== Classification of resources ==
Energy resources may be classified as primary resources, suitable for end use without conversion to another form, or secondary resources, where the usable form of energy required substantial conversion from a primary source. Examples of primary energy resources are wind power, solar power, wood fuel, fossil fuels such as coal, oil and natural gas, and uranium. Secondary resources are those such as electricity, hydrogen, or other synthetic fuels.
Another important classification is based on the time required to regenerate an energy resource. "Renewable resources" are those that recover their capacity in a time frame significant to human needs. Examples are hydroelectric power or wind power, where the natural phenomena that are the primary source of energy are ongoing and not depleted by human demands. Non-renewable resources are those that are significantly depleted by human usage and that will not recover their potential significantly during human lifetimes. An example of a non-renewable energy source is coal, which does not form naturally at a rate that would support human use.
== Fossil fuels ==
Fossil fuel (primary non-renewable fossil) sources burn coal or hydrocarbon fuels, which are the remains of the decomposition of plants and animals. There are three main types of fossil fuels: coal, petroleum, and natural gas. Another fossil fuel, liquefied petroleum gas (LPG), is principally derived from the production of natural gas. Heat from burning fossil fuel is used either directly for space heating and process heating, or converted to mechanical energy for vehicles, industrial processes, or electrical power generation. These fossil fuels are part of the carbon cycle and allow solar energy stored in the fuel to be released.
The use of fossil fuels in the 18th and 19th century set the stage for the Industrial Revolution.
Fossil fuels make up the bulk of the world's current primary energy sources. In 2005, 81% of the world's energy needs were met from fossil sources. The technology and infrastructure for the use of fossil fuels already exist. Liquid fuels derived from petroleum deliver much usable energy per unit of weight or volume, which is advantageous when compared with lower energy density sources such as batteries. Fossil fuels are currently economical for decentralized energy use.
Energy dependence on imported fossil fuels creates energy security risks for dependent countries. Oil dependence in particular has led to war, funding of radicals, monopolization, and socio-political instability.
Fossil fuels are non-renewable resources, which will eventually decline in production and become exhausted. While the processes that created fossil fuels are ongoing, fuels are consumed far more quickly than the natural rate of replenishment. Extracting fuels becomes increasingly costly as society consumes the most accessible fuel deposits. Extraction of fossil fuels results in environmental degradation, such as the strip mining and mountaintop removal for coal.
Fuel efficiency is a form of thermal efficiency: the efficiency of a process that converts the chemical potential energy contained in a carrier fuel into kinetic energy or work. Fuel economy, the energy efficiency of a particular vehicle, is given as a ratio of distance travelled per unit of fuel consumed. Weight-specific efficiency (efficiency per unit weight) may be stated for freight, and passenger-specific efficiency per passenger. The inefficient atmospheric combustion (burning) of fossil fuels in vehicles, buildings, and power plants contributes to urban heat islands.
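The fuel-economy ratio can be computed either way round: distance per unit of fuel, or fuel per fixed distance. The trip figures below are illustrative, not from the source.

```python
# Fuel economy as a ratio of distance travelled per unit of fuel consumed.
distance_km = 640.0   # assumed trip distance
fuel_litres = 40.0    # assumed fuel consumed

km_per_litre = distance_km / fuel_litres    # 16.0 km/L
litres_per_100km = 100.0 / km_per_litre     # 6.25 L/100 km (inverse convention)

print(km_per_litre, litres_per_100km)
```

The two conventions carry the same information; note that a higher km/L figure corresponds to a lower L/100 km figure.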
Conventional production of oil peaked, conservatively, between 2007 and 2010. In 2010, it was estimated that an investment of $8 trillion in non-renewable resources would be required to maintain current levels of production for 25 years. In 2010, governments subsidized fossil fuels by an estimated $500 billion a year. Fossil fuels are also a source of greenhouse gas emissions, leading to concerns about global warming if consumption is not reduced.
The combustion of fossil fuels releases pollution into the atmosphere. Fossil fuels are mainly carbon compounds. During combustion, carbon dioxide is released, along with nitrogen oxides, soot and other fine particulates. Carbon dioxide is the main contributor to recent climate change.
Other emissions from fossil fuel power stations include sulphur dioxide, carbon monoxide (CO), hydrocarbons, volatile organic compounds (VOCs), mercury, arsenic, lead, cadmium, and other heavy metals, including traces of uranium.
A typical coal plant generates billions of kilowatt hours of electrical power per year.
== Nuclear ==
=== Fission ===
Nuclear power is the use of nuclear fission to generate useful heat and electricity. Fission of uranium produces nearly all economically significant nuclear power. Radioisotope thermoelectric generators form a very small component of energy generation, mostly in specialized applications such as deep space vehicles.
Nuclear power plants, excluding naval reactors, provided about 5.7% of the world's energy and 13% of the world's electricity in 2012.
In 2013, the IAEA reported that there were 437 operational nuclear power reactors in 31 countries, although not every reactor was producing electricity. In addition, there are approximately 140 naval vessels using nuclear propulsion in operation, powered by some 180 reactors. As of 2013, attaining a net energy gain from sustained nuclear fusion reactions, excluding natural fusion power sources such as the Sun, remains an ongoing area of international physics and engineering research. More than 60 years after the first attempts, commercial fusion power production remains unlikely before 2050.
There is an ongoing debate about nuclear power. Proponents, such as the World Nuclear Association, the IAEA and Environmentalists for Nuclear Energy contend that nuclear power is a safe, sustainable energy source that reduces carbon emissions. Opponents contend that nuclear power poses many threats to people and the environment.
Nuclear power plant accidents include the Chernobyl disaster (1986), the Fukushima Daiichi nuclear disaster (2011), and the Three Mile Island accident (1979). There have also been some nuclear submarine accidents. In terms of lives lost per unit of energy generated, analysis has determined that nuclear power has caused fewer fatalities per unit of energy generated than the other major sources of energy generation. Energy production from coal, petroleum, natural gas and hydropower has caused a greater number of fatalities per unit of energy generated due to air pollution and energy accident effects. However, the economic costs of nuclear power accidents are high, and meltdowns can take decades to clean up. The human costs of evacuations of affected populations and lost livelihoods are also significant.
One comparison sets nuclear power's latent deaths, such as from cancer, against other energy sources' immediate deaths per unit of energy generated (GWeyr). That study does not include fossil fuel-related cancer and other indirect deaths from fossil fuel consumption in its "severe accident" classification, which covers accidents with more than 5 fatalities.
As of 2012, according to the IAEA, there were 68 civil nuclear power reactors under construction worldwide in 15 countries, of which approximately 28 were in the People's Republic of China (PRC). The most recent nuclear power reactor to be connected to the electrical grid, as of May 2013, was at Hongyanhe Nuclear Power Plant in the PRC on February 17, 2013. In the United States, two new Generation III reactors are under construction at Vogtle. U.S. nuclear industry officials expect five new reactors to enter service by 2020, all at existing plants. In 2013, four aging, uncompetitive reactors were permanently closed.
Recent experiments in extraction of uranium use polymer ropes that are coated with a substance that selectively absorbs uranium from seawater. This process could make the considerable volume of uranium dissolved in seawater exploitable for energy production. Since ongoing geologic processes carry uranium to the sea in amounts comparable to the amount that would be extracted by this process, in a sense the sea-borne uranium becomes a sustainable resource.
Nuclear power is a low carbon power generation method of producing electricity, with an analysis of the literature on its total life cycle emission intensity finding that it is similar to renewable sources in a comparison of greenhouse gas (GHG) emissions per unit of energy generated. Since the 1970s, nuclear fuel has displaced about 64 gigatonnes of carbon dioxide equivalent (GtCO2-eq) greenhouse gases, that would have otherwise resulted from the burning of oil, coal or natural gas in fossil-fuel power stations.
==== Nuclear power phase-out and pull-backs ====
Japan's 2011 Fukushima Daiichi nuclear accident, which occurred in a reactor design from the 1960s, prompted a rethink of nuclear safety and nuclear energy policy in many countries. Germany decided to close all its reactors by 2022, and Italy has banned nuclear power. Following Fukushima, in 2011 the International Energy Agency halved its estimate of additional nuclear generating capacity to be built by 2035.
===== Fukushima =====
Following the 2011 Fukushima Daiichi nuclear disaster – the second-worst nuclear incident, which displaced 50,000 households after radioactive material leaked into the air, soil and sea, and which led, after radiation checks, to bans on some shipments of vegetables and fish – a global public support survey of energy sources by Ipsos (2011) was published, and nuclear fission was found to be the least popular.
==== Fission economics ====
The economics of new nuclear power plants is a controversial subject, since there are diverging views on this topic, and multibillion-dollar investments ride on the choice of an energy source. Nuclear power plants typically have high capital costs for building the plant, but low direct fuel costs. In recent years there has been a slowdown of electricity demand growth and financing has become more difficult, which affects large projects such as nuclear reactors, with very large upfront costs and long project cycles which carry a large variety of risks. In Eastern Europe, a number of long-established projects are struggling to find finance, notably Belene in Bulgaria and the additional reactors at Cernavoda in Romania, and some potential backers have pulled out. Where cheap gas is available and its future supply relatively secure, this also poses a major problem for nuclear projects.
Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date all operating nuclear power plants were developed by state-owned or regulated utility monopolies where many of the risks associated with construction costs, operating performance, fuel price, and other factors were borne by consumers rather than suppliers. Many countries have now liberalized the electricity market where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants.
==== Costs ====
Costs are likely to go up for currently operating and new nuclear power plants, due to increased requirements for on-site spent fuel management and elevated design basis threats. While first-of-their-kind designs, such as the EPRs under construction, are behind schedule and over budget, seven South Korean APR-1400s are presently under construction worldwide: two in South Korea at the Hanul Nuclear Power Plant, and four in the United Arab Emirates at the planned Barakah nuclear power plant, the largest nuclear station construction project in the world as of 2016. The first reactor, Barakah-1, is 85% completed and on schedule for grid connection during 2017.
Two of the four EPRs under construction (in Finland and France) are significantly behind schedule and substantially over cost.
== Renewable sources ==
Renewable energy is generally defined as energy that comes from resources which are naturally replenished on a human timescale such as sunlight, wind, rain, tides, waves and geothermal heat. Renewable energy replaces conventional fuels in four distinct areas: electricity generation, hot water/space heating, motor fuels, and rural (off-grid) energy services.
Including traditional biomass usage, about 19% of global energy consumption comes from renewable resources. Wind power is a prominent and growing renewable source: global wind power capacity increased by 12% in 2021. While not the case for all countries, a study found that in 58% of sampled countries renewable energy consumption had a positive impact on economic growth. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.
Unlike many other energy sources, renewable energy sources are less restricted by geography. Deployment of renewable energy also brings economic benefits in addition to combating climate change. Rural electrification has been studied at multiple sites, with positive effects found on commercial spending, appliance use, and other activities requiring electricity. Renewable energy growth in at least 38 countries has been driven by high rates of electricity usage, and international support for promoting renewable sources such as solar and wind has continued to grow.
While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development. To ensure human development continues sustainably, governments around the world are researching ways to integrate renewable sources into their countries and economies. For example, the UK Government's Department of Energy and Climate Change created a mapping technique in its 2050 Pathways work to educate the public on land competition between energy supply technologies. This tool lets users explore the limitations and potential of their surrounding land and country in terms of energy production.
=== Hydroelectricity ===
Hydroelectricity is electric power generated by hydropower; the force of falling or flowing water. In 2015 hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity and was expected to increase about 3.1% each year for the following 25 years.
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela.
The cost of hydroelectricity is relatively low, making it a competitive source of renewable electricity. The average cost of electricity from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. Hydro is also a flexible source of electricity since plants can be ramped up and down very quickly to adapt to changing energy demands. However, damming interrupts the flow of rivers and can harm local ecosystems, and building large dams and reservoirs often involves displacing people and wildlife. Once a hydroelectric complex is constructed, the project produces no direct waste, and has a considerably lower output level of the greenhouse gas carbon dioxide than fossil fuel powered energy plants.
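To illustrate the cost range cited above, the sketch below works out annual generation and generation cost for a hypothetical hydro plant; the plant size and capacity factor are illustrative assumptions, not figures from the text.

```python
# Rough illustration of hydroelectric output and cost, using the
# 3-5 US cents/kWh range for plants larger than 10 MW cited above.
# The plant capacity and capacity factor are hypothetical.
capacity_mw = 50          # assumed plant size (larger than 10 MW)
capacity_factor = 0.45    # assumed fraction of the year at full output
hours_per_year = 8760

annual_kwh = capacity_mw * 1000 * capacity_factor * hours_per_year
low_cost = annual_kwh * 0.03   # 3 US cents per kWh
high_cost = annual_kwh * 0.05  # 5 US cents per kWh

print(f"Annual generation: {annual_kwh / 1e6:.0f} GWh")
print(f"Generation cost range: ${low_cost / 1e6:.1f}M - ${high_cost / 1e6:.1f}M per year")
```

Under these assumptions the plant delivers roughly 197 GWh a year, so even the high end of the cost range stays under $10 million annually.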
=== Wind ===
Wind power harnesses the power of the wind to propel the blades of wind turbines. These turbines cause the rotation of magnets, which creates electricity. Wind towers are usually built together on wind farms. There are offshore and onshore wind farms. Global wind power capacity has expanded rapidly to 336 GW in June 2014, and wind energy production was around 4% of total worldwide electricity usage, and growing rapidly.
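The power a turbine can extract from moving air follows the standard relation P = ½ρAv³Cp. The sketch below evaluates it for an assumed rotor diameter, wind speed, and power coefficient; these values are illustrative, not drawn from the text.

```python
import math

# Power available to a wind turbine: P = 0.5 * rho * A * v^3 * Cp.
# Rotor diameter, wind speed, and power coefficient are assumptions.
rho = 1.225          # air density at sea level, kg/m^3
diameter = 100.0     # hypothetical rotor diameter, m
v = 10.0             # wind speed, m/s
cp = 0.40            # assumed power coefficient (the Betz limit is ~0.593)

area = math.pi * (diameter / 2) ** 2      # swept area of the rotor
power_w = 0.5 * rho * area * v ** 3 * cp  # extracted mechanical power

print(f"Mechanical power extracted: {power_w / 1e6:.2f} MW")
```

Because power scales with the cube of wind speed, a site with modestly stronger winds yields disproportionately more energy, which is why siting matters so much for wind farms.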
Wind power is widely used in Europe, Asia, and the United States. Several countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, 14% in Ireland, and 9% in Germany in 2010. By 2011, at times over 50% of electricity in Germany and Spain came from wind and solar power. As of 2011, 83 countries around the world were using wind power on a commercial basis.
Many of the world's largest onshore wind farms are located in the United States, China, and India. Most of the world's largest offshore wind farms are located in Denmark, Germany and the United Kingdom. The two largest offshore wind farms are currently the 630 MW London Array and Gwynt y Môr.
=== Solar ===
=== Biofuels ===
A biofuel is a fuel that derives its energy from geologically recent carbon fixation, such as that carried out by plants and microalgae. These fuels are produced from biomass (recently living organisms, most often plants or plant-derived materials), which can be converted into convenient energy-containing substances in three ways: thermal conversion, chemical conversion, and biochemical conversion. The resulting fuel may be in solid, liquid, or gas form. Biofuels have increased in popularity because of rising oil prices and the need for energy security.
Bioethanol is an alcohol made by fermentation, mostly from carbohydrates produced in sugar or starch crops such as corn or sugarcane. Cellulosic biomass, derived from non-food sources, such as trees and grasses, is also being developed as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the USA and in Brazil. Current plant design does not provide for converting the lignin portion of plant raw materials to fuel components by fermentation.
Biodiesel is made from vegetable oils and animal fats. Biodiesel can be used as a fuel for vehicles in its pure form, but it is usually used as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Research is also underway on producing renewable fuels by decarboxylation.
In 2010, worldwide biofuel production reached 105 billion liters (28 billion gallons US), up 17% from 2009, and biofuels provided 2.7% of the world's fuels for road transport, a contribution largely made up of ethanol and biodiesel. Global ethanol fuel production reached 86 billion liters (23 billion gallons US) in 2010, with the United States and Brazil as the world's top producers, accounting together for 90% of global production. The world's largest biodiesel producer is the European Union, accounting for 53% of all biodiesel production in 2010. As of 2011, mandates for blending biofuels existed in 31 countries at the national level and in 29 states or provinces. The International Energy Agency has a goal for biofuels to meet more than a quarter of world demand for transportation fuels by 2050, to reduce dependence on petroleum and coal.
=== Geothermal ===
Geothermal energy is thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. The geothermal energy of the Earth's crust originates from the original formation of the planet (20%) and from radioactive decay of minerals (80%). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots γη (ge), meaning earth, and θερμος (thermos), meaning hot.
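The conduction of heat driven by the geothermal gradient can be sketched with Fourier's law, q = k · dT/dz. The conductivity and gradient values below are typical order-of-magnitude assumptions, not figures from the text.

```python
# Conductive heat flow through crustal rock via Fourier's law,
# q = k * dT/dz. Both input values are typical assumed magnitudes.
k = 3.0          # assumed thermal conductivity of crustal rock, W/(m*K)
dT_dz = 0.025    # assumed geothermal gradient, K/m (25 degC per km)

heat_flux = k * dT_dz  # heat flow per unit surface area, W/m^2

print(f"Conductive heat flux: {heat_flux * 1000:.0f} mW/m^2")
```

The resulting tens of milliwatts per square metre is small compared with sunlight, which is why geothermal electricity generation has concentrated on anomalously hot regions near plate boundaries rather than on average crust.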
Earth's internal heat is thermal energy generated from radioactive decay and continual heat loss from Earth's formation. Temperatures at the core-mantle boundary may reach over 4000 °C (7,200 °F). The high temperature and pressure in Earth's interior cause some rock to melt and solid mantle to behave plastically, resulting in portions of mantle convecting upward since it is lighter than the surrounding rock. Rock and water are heated in the crust, sometimes up to 370 °C (700 °F).
From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times, but it is now better known for electricity generation. Worldwide, 11,400 megawatts (MW) of geothermal power was online in 24 countries in 2012. An additional 28 gigawatts of direct geothermal heating capacity was installed for district heating, space heating, spas, industrial processes, desalination and agricultural applications as of 2010.
Geothermal power is cost effective, reliable, sustainable, and environmentally friendly, but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have dramatically expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
The Earth's geothermal resources are theoretically more than adequate to supply humanity's energy needs, but only a very small fraction can be profitably exploited. Drilling and exploration for deep resources is very expensive. Forecasts for the future of geothermal power depend on assumptions about technology, energy prices, subsidies, and interest rates. Pilot programs such as EWEB's customer opt-in Green Power Program show that customers are willing to pay a little more for a renewable energy source like geothermal. As a result of government-assisted research and industry experience, the cost of generating geothermal power has decreased by 25% over the past two decades. In 2001, geothermal energy cost between two and ten US cents per kWh.
=== Oceanic ===
Marine Renewable Energy (MRE) or marine power (also sometimes referred to as ocean energy, ocean power, or marine and hydrokinetic energy) refers to the energy carried by the mechanical energy of ocean waves, currents, and tides, shifts in salinity gradients, and ocean temperature differences. MRE has the potential to become a reliable and renewable energy source because of the cyclical nature of the oceans. The movement of water in the world's oceans creates a vast store of kinetic energy or energy in motion. This energy can be harnessed to generate electricity to power homes, transport, and industries.
The term marine energy encompasses both wave power, i.e. power from surface waves, and tidal power, i.e. obtained from the kinetic energy of large bodies of moving water. Offshore wind power is not a form of marine energy, as wind power is derived from the wind, even if the wind turbines are placed over water. The oceans have a tremendous amount of energy and are close to many if not most concentrated populations. Ocean energy has the potential to provide a substantial amount of new renewable energy around the world.
Marine energy technology is in its first stage of development. To be developed further, MRE needs efficient methods of capturing, storing, and transporting ocean power so it can be used where needed. In recent years, countries around the world have started implementing market strategies to commercialize MRE. Canada and China introduced incentives, such as feed-in tariffs (FiTs), which are above-market prices for MRE that give investors and project developers a stable income. Other financial strategies consist of subsidies, grants, and funding from public-private partnerships (PPPs). China alone approved 100 ocean projects in 2019. Portugal and Spain recognize the potential of MRE in accelerating decarbonization, which is fundamental to meeting the goals of the Paris Agreement. Both countries are focusing on solar and offshore wind auctions to attract private investment, ensure cost-effectiveness, and accelerate MRE growth. Ireland sees MRE as a key component of reducing its carbon footprint. The Offshore Renewable Energy Development Plan (OREDP) supports the exploration and development of the country's significant offshore energy potential. Additionally, Ireland has implemented the Renewable Electricity Support Scheme (RESS), which includes auctions designed to provide financial support for communities, increase technology diversity, and guarantee energy security.
However, while research is increasing, there are concerns about threats to marine mammals and habitats and about potential changes to ocean currents. MRE can be a renewable energy source that helps coastal communities transition from fossil fuels, but researchers are calling for a better understanding of its environmental impacts. Because ocean-energy areas are often isolated from both fishing and sea traffic, these zones may provide shelter from humans and predators for some marine species. MRE devices can be an ideal home for many fish, crayfish, mollusks, and barnacles, and may also indirectly affect the seabirds and marine mammals that feed on those species. Such areas may likewise create an "artificial reef effect" that boosts biodiversity nearby. Noise generated by the technology is limited, allowing fish and mammals living near an installation to return. In the most recent State of the Science report on MRE, the authors state that there is no evidence of fish, mammals, or seabirds being injured by collision, noise pollution, or electromagnetic fields. The uncertainty about its environmental impact stems from the low number of MRE devices in the ocean today from which data can be collected.
=== 100% renewable energy ===
The incentive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. Renewable energy use has grown much faster than anyone anticipated. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Also, Stephen W. Pacala and Robert H. Socolow have developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges."
Mark Z. Jacobson says producing all new energy with wind power, solar power, and hydropower by 2030 is feasible and existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Jacobson says that energy costs with a wind, solar, water system should be similar to today's energy costs.
Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs ... Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly larger amounts of electricity than the total current or projected domestic demand."
Critics of the "100% renewable energy" approach include Vaclav Smil and James E. Hansen. Smil and Hansen are concerned about the variable output of solar and wind power, but Amory Lovins argues that the electricity grid can cope, just as it routinely backs up nonworking coal-fired and nuclear plants with working ones.
Google spent $30 million on their "Renewable Energy Cheaper than Coal" project to develop renewable energy and stave off catastrophic climate change. The project was cancelled after concluding that a best-case scenario for rapid advances in renewable energy could only result in emissions 55 percent below the fossil fuel projections for 2050.
== Increased energy efficiency ==
Although increasing the efficiency of energy use is not energy development per se, it may be considered under the topic of energy development since it makes existing energy sources available to do work.
Efficient energy use reduces the amount of energy required to provide products and services. For example, insulating a home allows a building to use less heating and cooling energy to maintain a comfortable temperature. Installing fluorescent lamps or natural skylights reduces the amount of energy required for illumination compared to incandescent light bulbs. Compact fluorescent lights use two-thirds less energy and may last 6 to 10 times longer than incandescent lights. Improvements in energy efficiency are most often achieved by adopting an efficient technology or production process.
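The "two-thirds less energy" claim above can be made concrete with a quick comparison; the lamp wattage and daily usage hours below are assumptions chosen only for illustration.

```python
# Illustration of the lighting savings described above: a compact
# fluorescent lamp using two-thirds less energy than an incandescent.
# The wattage and usage pattern are assumed, not from the text.
incandescent_w = 60                   # assumed incandescent bulb wattage
cfl_w = incandescent_w * (1 - 2 / 3)  # CFL uses two-thirds less energy
hours_per_day = 4                     # assumed daily usage
days = 365

inc_kwh = incandescent_w * hours_per_day * days / 1000
cfl_kwh = cfl_w * hours_per_day * days / 1000

print(f"Incandescent: {inc_kwh:.1f} kWh/yr, CFL: {cfl_kwh:.1f} kWh/yr")
print(f"Annual saving per bulb: {inc_kwh - cfl_kwh:.1f} kWh")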
Reducing energy use may save consumers money, if the energy savings offsets the cost of an energy efficient technology. Reducing energy use reduces emissions. According to the International Energy Agency, improved energy efficiency in buildings, industrial processes and transportation could reduce the global energy demand in 2050 to around 8% smaller than today, but serving an economy more than twice as big and a population of about 2 billion more people.
Energy efficiency and renewable energy are said to be the twin pillars of sustainable energy policy. In many countries energy efficiency is also seen to have a national security benefit because it can be used to reduce the level of energy imports from foreign countries and may slow down the rate at which domestic energy resources are depleted.
Research has found that, "for OECD countries, wind, geothermal, hydro and nuclear have the lowest hazard rates among energy sources in production".
== Transmission ==
While new sources of energy are only rarely discovered or made possible by new technology, distribution technology continually evolves. The use of fuel cells in cars, for example, is an anticipated delivery technology. This section presents the various delivery technologies that have been important to historic energy development. They all rely in some way on the energy sources listed in the previous section.
=== Shipping and pipelines ===
Coal, petroleum and their derivatives are delivered by boat, rail, or road. Petroleum and natural gas may also be delivered by pipeline, and coal via a slurry pipeline. Fuels such as gasoline and LPG may also be delivered via aircraft. Natural gas pipelines must maintain a certain minimum pressure to function correctly. The higher costs of ethanol transportation and storage are often prohibitive.
=== Wired energy transfer ===
Electricity grids are the networks used to transmit and distribute power from production source to end user, when the two may be hundreds of kilometres away. Sources include electrical generation plants such as a nuclear reactor, coal burning power plant, etc. A combination of sub-stations and transmission lines are used to maintain a constant flow of electricity. Grids may suffer from transient blackouts and brownouts, often due to weather damage. During certain extreme space weather events solar wind can interfere with transmissions. Grids also have a predefined carrying capacity or load that cannot safely be exceeded. When power requirements exceed what's available, failures are inevitable. To prevent problems, power is then rationed.
Industrialised countries such as Canada, the US, and Australia are among the highest per capita consumers of electricity in the world, which is possible thanks to a widespread electrical distribution network. The US grid is one of the most advanced, although infrastructure maintenance is becoming a problem. CurrentEnergy provides a realtime overview of the electricity supply and demand for California, Texas, and the Northeast of the US. African countries with small scale electrical grids have a correspondingly low annual per capita usage of electricity. One of the most powerful power grids in the world supplies power to the state of Queensland, Australia.
=== Wireless energy transfer ===
Wireless power transfer is a process whereby electrical energy is transmitted from a power source to an electrical load that does not have a built-in power source, without the use of interconnecting wires. Currently available technology is limited to short distances and relatively low power levels.
Orbiting solar power collectors would require wireless transmission of power to Earth. The proposed method involves creating a large beam of microwave-frequency radio waves, which would be aimed at a collector antenna site on the Earth. Formidable technical challenges exist to ensure the safety and profitability of such a scheme.
== Storage ==
Energy storage is accomplished by devices or physical media that store energy to perform useful operation at a later time. A device that stores energy is sometimes called an accumulator.
All forms of energy are either potential energy (e.g. chemical, gravitational, electrical energy, temperature differential, latent heat, etc.) or kinetic energy (e.g. momentum). Some technologies provide only short-term energy storage, and others can be very long-term, such as power-to-gas using hydrogen or methane and the storage of heat or cold between opposing seasons in deep aquifers or bedrock. A wind-up clock stores potential energy (in this case mechanical, in the spring tension), a battery stores readily convertible chemical energy to operate a mobile phone, and a hydroelectric dam stores energy in a reservoir as gravitational potential energy. Ice storage tanks store ice (thermal energy in the form of latent heat) at night to meet peak demand for cooling. Fossil fuels such as coal and gasoline store ancient energy derived from sunlight by organisms that later died, became buried and over time were then converted into these fuels. Even food (which is made by the same process as fossil fuels) is a form of energy stored in chemical form.
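The hydroelectric reservoir example above stores gravitational potential energy, E = mgh. The sketch below evaluates it for a hypothetical reservoir; volume and head are illustrative assumptions.

```python
# Gravitational potential energy stored behind a hydroelectric dam,
# E = m * g * h. Reservoir volume and head are assumed values.
g = 9.81           # gravitational acceleration, m/s^2
volume_m3 = 1e7    # assumed reservoir volume, cubic metres
density = 1000     # density of water, kg/m^3
head_m = 100       # assumed height drop to the turbines, m

energy_j = volume_m3 * density * g * head_m  # stored energy, joules
energy_mwh = energy_j / 3.6e9                # 1 MWh = 3.6e9 J

print(f"Stored energy: {energy_mwh:.0f} MWh")
```

A reservoir of this assumed size holds a few thousand megawatt-hours, which is why pumped hydro remains the dominant form of large-scale grid energy storage.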
== History ==
From prehistory, when humanity discovered fire to keep warm and roast food, through the Middle Ages, when populations built windmills to grind wheat, to the modern era, in which nations can generate electricity by splitting the atom, humanity has searched endlessly for energy sources.
Except for nuclear, geothermal and tidal, all other energy sources derive from current solar insolation or from fossil remains of plant and animal life that relied upon sunlight. Ultimately, solar energy itself is the result of the Sun's nuclear fusion. Geothermal power from hot, hardened rock above the magma of the Earth's core is the result of the decay of radioactive materials present beneath the Earth's crust, and nuclear fission relies on man-made fission of heavy radioactive elements in the Earth's crust; in both cases these elements were produced in supernova explosions before the formation of the Solar System.
Since the beginning of the Industrial Revolution, the question of the future of energy supplies has been of interest. In 1865, William Stanley Jevons published The Coal Question, in which he saw that the reserves of coal were being depleted and that oil was an ineffective replacement. In 1914, the U.S. Bureau of Mines stated that the total production was 5.7 billion barrels (910,000,000 m3). In 1956, geophysicist M. King Hubbert deduced that U.S. oil production would peak between 1965 and 1970, and that world oil production would peak "within half a century", on the basis of 1956 data. In 1989, Colin Campbell predicted a peak. In 2004, OPEC estimated that, with substantial investments, it would nearly double oil output by 2025.
=== Sustainability ===
The environmental movement has emphasized sustainability of energy use and development. Renewable energy is sustainable in its production; the available supply will not be diminished for the foreseeable future - millions or billions of years. "Sustainability" also refers to the ability of the environment to cope with waste products, especially air pollution. Sources which have no direct waste products (such as wind, solar, and hydropower) are brought up on this point. With global demand for energy growing, the need to adopt various energy sources is growing. Energy conservation is an alternative or complementary process to energy development. It reduces the demand for energy by using it efficiently.
=== Resilience ===
Some observers contend that the idea of "energy independence" is an unrealistic and opaque concept. The alternative of "energy resilience" is a goal aligned with economic, security, and energy realities. The notion of resilience in energy was detailed in the 1982 book Brittle Power: Energy Strategy for National Security. The authors argued that simply switching to domestic energy would not be inherently secure, because the true weakness is the often interdependent and vulnerable energy infrastructure of a country. Key aspects such as gas lines and the electrical power grid are often centralized and easily susceptible to disruption. They conclude that a "resilient energy supply" is necessary for both national security and the environment. They recommend a focus on energy efficiency and on renewable energy that is decentralized.
In 2008, former Intel Corporation Chairman and CEO Andrew Grove looked to energy resilience, arguing that complete independence is unfeasible given the global market for energy. He describes energy resilience as the ability to adjust to interruptions in the supply of energy. To that end, he suggests the U.S. make greater use of electricity. Electricity can be produced from a variety of sources, and a diverse energy supply will be less affected by the disruption of any one source. He reasons that another feature of electrification is that electricity is "sticky" – meaning that electricity produced in the U.S. must stay there, because it cannot be transported overseas. According to Grove, a key aspect of advancing electrification and energy resilience will be converting the U.S. automotive fleet from gasoline-powered to electric-powered. This, in turn, will require the modernization and expansion of the electrical power grid. As organizations such as The Reform Institute have pointed out, advancements associated with the developing smart grid would facilitate the ability of the grid to absorb vehicles en masse connecting to it to charge their batteries.
=== Present and future ===
Extrapolations from current knowledge to the future offer a choice of energy futures. Some predictions parallel the Malthusian catastrophe hypothesis. Numerous complex model-based scenarios exist, as pioneered by Limits to Growth. Modeling approaches offer ways to analyze diverse strategies, and hopefully to find a road to rapid and sustainable development of humanity. Short-term energy crises are also a concern of energy development. Extrapolations lack plausibility, particularly when they predict a continual increase in oil consumption.
Energy production usually requires an energy investment. Drilling for oil or building a wind power plant requires energy. The fossil fuel resources that are left are often increasingly difficult to extract and convert. They may thus require increasingly higher energy investments. If investment is greater than the value of the energy produced by the resource, it is no longer an effective energy source. These resources are no longer an energy source but may be exploited for value as raw materials. New technology may lower the energy investment required to extract and convert the resources, although ultimately basic physics sets limits that cannot be exceeded.
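The reasoning above is often summarized as energy return on investment (EROI): a resource remains an effective energy source only while the energy it yields exceeds the energy spent extracting and converting it. The sketch below uses illustrative numbers, not measured values.

```python
# Sketch of the energy-return-on-investment (EROI) reasoning above.
# All energy figures are illustrative, not measured values.
def eroi(energy_out_j, energy_in_j):
    """Ratio of energy delivered to energy invested in extraction."""
    return energy_out_j / energy_in_j

easy_field = eroi(energy_out_j=100.0, energy_in_j=5.0)   # easily extracted oil
hard_field = eroi(energy_out_j=100.0, energy_in_j=80.0)  # difficult residues

for name, ratio in [("easy field", easy_field), ("difficult field", hard_field)]:
    status = "net energy source" if ratio > 1 else "net energy sink"
    print(f"{name}: EROI = {ratio:.1f} ({status})")
```

As the ratio approaches 1, the resource stops functioning as an energy source, though it may still be exploited for value as a raw material.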
Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. The peaking of world hydrocarbon production (peak oil) may lead to significant changes, and require sustainable methods of production. One vision of a sustainable energy future involves all human structures on the earth's surface (i.e., buildings, vehicles and roads) doing artificial photosynthesis (using sunlight to split water as a source of hydrogen and absorbing carbon dioxide to make fertilizer) more efficiently than plants.
As the contemporary space industry's economic activity and related private spaceflight grow, along with the manufacturing industries that go into Earth orbit or beyond, delivering materials to those regions will require further energy development. Researchers have contemplated space-based solar power for collecting solar power for use on Earth. Space-based solar power has been researched since the early 1970s. It would require construction of collector structures in space. The advantages over ground-based solar power are a higher intensity of light and no weather to interrupt power collection.
== Energy technology ==
Energy technology is an interdisciplinary engineering science concerned with the efficient, safe, environmentally friendly, and economical extraction, conversion, transportation, storage, and use of energy, targeted towards yielding high efficiency while minimizing side effects on humans, nature, and the environment.
For people, energy is an overwhelming need, and as a scarce resource, it has been an underlying cause of political conflicts and wars. The gathering and use of energy resources can be harmful to local ecosystems and may have global outcomes.
Energy is also the capacity to do work. We can get energy from food. Energy takes different forms, such as kinetic, potential, mechanical, heat, and light. Energy is required by individuals and by society as a whole for lighting, heating, cooking, running industries, operating transportation, and so forth. Broadly, energy sources fall into two types:
1. Renewable energy sources
2. Non-renewable energy sources
=== Interdisciplinary fields ===
As an interdisciplinary science, energy technology is linked with many fields in various, overlapping ways.
Physics, for thermodynamics and nuclear physics
Chemistry, for fuel, combustion, air pollution, flue gas, battery technology and fuel cells.
Electrical engineering
Mechanical engineering, often for fluid energy machines such as combustion engines, turbines, pumps and compressors.
Geography, for geothermal energy and exploration for resources.
Mining, for petrochemical and fossil fuels.
Agriculture and forestry, for sources of renewable energy.
Meteorology for wind and solar energy.
Water and Waterways, for hydropower.
Waste management, for environmental impact.
Transportation, for energy-saving transportation systems.
Environmental studies, for studying the effect of energy use and production on the environment, nature and climate change.
Lighting technology, for interior and exterior natural and artificial lighting design, installations, and energy savings.
Energy cost/benefit analysis, for simple payback and life-cycle costing of recommended energy efficiency and conservation measures.
=== Electrical engineering ===
Electric power engineering deals with the production and use of electrical energy, which can entail the study of machines such as generators, electric motors and transformers. Infrastructure involves substations and transformer stations, power lines and electrical cable. Load management and power management over networks have meaningful sway on overall energy efficiency. Electric heating is also widely used and researched.
=== Thermodynamics ===
Thermodynamics deals with the fundamental laws of energy conversion and is drawn from theoretical physics.
=== Thermal and chemical energy ===
Thermal and chemical energy are intertwined with chemistry and environmental studies. Combustion has to do with burners and chemical engines of all kinds, grates and incinerators along with their energy efficiency, pollution and operational safety.
Exhaust gas purification technology aims to lessen air pollution through sundry mechanical, thermal and chemical cleaning methods. Emission control technology is a field of process and chemical engineering. Boiler technology deals with the design, construction and operation of steam boilers and turbines (also used in nuclear power generation, see below), drawn from applied mechanics and materials engineering.
Energy conversion has to do with internal combustion engines, turbines, pumps, fans and so on, which are used for transportation, mechanical energy and power generation. High thermal and mechanical loads bring about operational safety worries which are dealt with through many branches of applied engineering science.
=== Nuclear energy ===
Nuclear technology deals with nuclear power production from nuclear reactors, along with the processing of nuclear fuel and disposal of radioactive waste, drawing from applied nuclear physics, nuclear chemistry and radiation science.
Nuclear power generation has been politically controversial in many countries for several decades but the electrical energy produced through nuclear fission is of worldwide importance. There are high hopes that fusion technologies will one day replace most fission reactors but this is still a research area of nuclear physics.
=== Renewable energy ===
Renewable energy has many branches.
==== Wind power ====
Wind turbines convert wind energy into electricity by connecting a spinning rotor to a generator. Wind turbines draw energy from atmospheric currents and are designed using aerodynamics along with knowledge taken from mechanical and electrical engineering. The wind passes across the aerodynamic rotor blades, creating an area of higher pressure and an area of lower pressure on either side of each blade. The forces of lift and drag arise from this difference in air pressure. Because the lift force is stronger than the drag force, the rotor, which is connected to a generator, spins. Electricity is then produced as the generator converts the aerodynamically driven rotation into electrical energy.
Recognized as one of the most efficient renewable energy sources, wind power is becoming increasingly relevant and widely used around the world. Wind power does not use any water in the production of energy, making it a good source of energy for areas without much water. Wind energy could also still be produced if the climate changes in line with current predictions, as it relies solely on wind.
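The cubic dependence of wind power on wind speed described above can be sketched with the standard rotor-power formula P = ½ρAv³Cp, where Cp is the power coefficient (bounded in theory by the Betz limit of about 0.593). The rotor diameter, wind speeds, and Cp value below are illustrative, not figures from any particular turbine.

```python
import math

def wind_power(rotor_diameter_m, wind_speed_ms, cp=0.40, air_density=1.225):
    """Power (W) extracted by a rotor: P = 0.5 * rho * A * v^3 * Cp."""
    area = math.pi * (rotor_diameter_m / 2) ** 2  # swept area, m^2
    return 0.5 * air_density * area * wind_speed_ms ** 3 * cp

# An 80 m rotor in a 6 m/s wind (hypothetical numbers).
p1 = wind_power(80, 6.0)
# Doubling the wind speed multiplies the power by 2**3 = 8.
p2 = wind_power(80, 12.0)
```

The v³ term is why turbine siting is so sensitive to average wind speed: a modest increase in wind speed yields a disproportionate increase in output.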
==== Geothermal ====
Deep within the Earth is an extremely hot layer of molten rock called magma. The very high temperatures of the magma heat nearby groundwater. Various technologies have been developed to benefit from such heat, such as different types of power plants (dry steam, flash or binary), heat pumps, or wells. These processes of harnessing the heat incorporate, in one form or another, a turbine which is spun by either the hot water or the steam produced by it. The spinning turbine, being connected to a generator, produces electricity. A more recent innovation involves shallow closed-loop systems that pump heat to and from structures by taking advantage of the nearly constant temperature of soil around 10 feet deep.
==== Hydropower ====
Hydropower draws mechanical energy from rivers, ocean waves and tides. Civil engineering is used to study and build dams, tunnels, waterways and manage coastal resources through hydrology and geology. A low speed water turbine spun by flowing water can power an electrical generator to produce electricity.
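The power available from such a scheme follows the standard relation P = ρgQhη, where Q is volumetric flow, h is the head (height the water falls), and η is overall efficiency. A short sketch with illustrative numbers:

```python
def hydro_power(flow_m3s, head_m, efficiency=0.9, rho=1000.0, g=9.81):
    """Electrical power (W) from falling water: P = rho * g * Q * h * eta."""
    return rho * g * flow_m3s * head_m * efficiency

# A hypothetical plant: 100 m^3/s of flow over a 50 m head,
# at 90% overall (turbine + generator) efficiency.
p = hydro_power(flow_m3s=100.0, head_m=50.0)
```

Because flow and head enter linearly, a high-head mountain scheme with modest flow can match the output of a large low-head river plant.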
==== Bioenergy ====
Bioenergy deals with the gathering, processing and use of biomasses grown in biological manufacturing, agriculture and forestry from which power plants can draw burning fuel. Ethanol, methanol (both controversial) or hydrogen for fuel cells can be had from these technologies and used to generate electricity.
==== Enabling technologies ====
Heat pumps and Thermal energy storage are classes of technologies that can enable the utilization of renewable energy sources that would otherwise be inaccessible due to a temperature that is too low for utilization or a time lag between when the energy is available and when it is needed. While enhancing the temperature of available renewable thermal energy, heat pumps have the additional property of leveraging electrical power (or in some cases mechanical or thermal power) by using it to extract additional energy from a low quality source (such as seawater, lake water, the ground, the air, or waste heat from a process).
Thermal storage technologies allow heat or cold to be stored for periods of time ranging from hours or overnight to interseasonal, and can involve storage of sensible energy (i.e., by changing the temperature of a medium) or latent energy (i.e., through phase changes of a medium, such as between water and slush or ice). Short-term thermal storages can be used for peak-shaving in district heating or electrical distribution systems. Kinds of renewable or alternative energy sources that can be enabled include natural energy (e.g., collected via solar-thermal collectors, or dry cooling towers used to collect winter's cold), waste energy (e.g., from HVAC equipment, industrial processes or power plants), or surplus energy (e.g., seasonally from hydropower projects or intermittently from wind farms). The Drake Landing Solar Community (Alberta, Canada) is illustrative: borehole thermal energy storage allows the community to get 97% of its year-round heat from solar collectors on the garage roofs, with most of the heat collected in summer. Types of storages for sensible energy include insulated tanks, borehole clusters in substrates ranging from gravel to bedrock, deep aquifers, or shallow lined pits that are insulated on top. Some types of storage are capable of storing heat or cold between opposing seasons (particularly if very large), and some storage applications require inclusion of a heat pump. Latent heat is typically stored in ice tanks or what are called phase-change materials (PCMs).
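The sensible-energy case above follows Q = mcΔT (mass times specific heat times temperature change). A minimal sketch, using an illustrative hot-water tank as the storage medium:

```python
def sensible_heat_kwh(mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    """Energy stored by changing a medium's temperature: Q = m * c * dT.

    Returns kilowatt-hours (1 kWh = 3.6e6 J).
    """
    joules = mass_kg * specific_heat_j_per_kg_k * delta_t_k
    return joules / 3.6e6

# Hypothetical example: a 10 m^3 (10,000 kg) water tank charged from
# 50 C to 90 C; water's specific heat is about 4186 J/(kg*K).
stored = sensible_heat_kwh(10_000, 4186, 40)
```

The same arithmetic, run in reverse, sizes the tank needed to bridge a given overnight or seasonal heating load.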
== See also ==
World energy supply and consumption
Technology
Water-energy nexus
Policy
Energy policy, Energy policy of the United States, Energy policy of China, Energy policy of India, Energy policy of the European Union, Energy policy of the United Kingdom, Energy policy of Russia, Energy policy of Brazil, Energy policy of Canada, Energy policy of the Soviet Union, Energy Industry Liberalization and Privatization (Thailand)
General
Seasonal thermal energy storage (Interseasonal thermal energy storage), Geomagnetically induced current, Energy harvesting, Timeline of sustainable energy research 2020–present
Feedstock
Raw material, Biomaterial, Energy consumption, Materials science, Recycling, Upcycling, Downcycling
Others
Thorium-based nuclear power, List of oil pipelines, List of natural gas pipelines, Ocean thermal energy conversion, Growth of photovoltaics
== Journals ==
Energy Sources, Part A: Recovery, Utilization and Environmental Effects
Energy Sources, Part B: Economics, Planning and Policy
International Journal of Green Energy
== External links ==
Bureau of Land Management 2012 Renewable Energy Priority Projects
Energypedia - a wiki about renewable energies in the context of development cooperation
Hidden Health and Environmental Costs Of Energy Production and Consumption In U.S.
IEA-ECES - International Energy Agency - Energy Conservation through Energy Storage programme.
IEA HPT TCP - International Energy Agency - Technology Collaboration Programme on Heatpumping Technologies.
IEA-SHC - International Energy Agency - Solar Heating and Cooling programme.
SDH - Solar District Heating Platform. (European Union)
In control theory, advanced process control (APC) refers to a broad range of techniques and technologies implemented within industrial process control systems. Advanced process controls are usually deployed optionally and in addition to basic process controls. Basic process controls are designed and built with the process itself to facilitate basic operation, control and automation requirements. Advanced process controls are typically added subsequently, often over the course of many years, to address particular performance or economic improvement opportunities in the process.
Process control (basic and advanced) normally implies the process industries, which include chemicals, petrochemicals, oil and mineral refining, food processing, pharmaceuticals, power generation, etc. These industries are characterized by continuous processes and fluid processing, as opposed to discrete parts manufacturing, such as automobile and electronics manufacturing. The term process automation is essentially synonymous with process control.
Process controls (basic as well as advanced) are implemented within the process control system, which may mean a distributed control system (DCS), programmable logic controller (PLC), and/or a supervisory control computer. DCSs and PLCs are typically industrially hardened and fault-tolerant. Supervisory control computers are often not hardened or fault-tolerant, but they bring a higher level of computational capability to the control system, to host valuable, but not critical, advanced control applications. Advanced controls may reside in either the DCS or the supervisory computer, depending on the application. Basic controls reside in the DCS and its subsystems, including PLCs.
== Types of Advanced Process Control ==
Following is a list of well-known types of advanced process control:
Advanced regulatory control (ARC) refers to several proven advanced control techniques, such as override or adaptive gain (but in all cases, "regulating or feedback"). ARC is also a catch-all term used to refer to any customized or non-simple technique that does not fall into any other category. ARCs are typically implemented using function blocks or custom programming capabilities at the DCS level. In some cases, ARCs reside at the supervisory control computer level.
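An override scheme of the kind mentioned above reduces to a selector between two controller outputs sharing a single final element (e.g., one valve). The sketch below is a minimal illustration; the valve percentages and the low-select convention are hypothetical, not drawn from any particular DCS.

```python
def override_select(primary_output, constraint_output, low_select=True):
    """Override (selector) control: two controllers drive one valve,
    and the selector passes the more conservative output through."""
    if low_select:
        return min(primary_output, constraint_output)
    return max(primary_output, constraint_output)

# A flow controller asks for 60% valve opening, but a pressure-constraint
# controller allows at most 45%; the low select lets the constraint win.
valve = override_select(60.0, 45.0)
```

In practice the inactive controller also needs anti-reset-windup handling so it can take over smoothly when the selection switches.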
Advanced process control (APC) refers to several proven advanced control techniques, such as feedforward, decoupling, and inferential control. APC can also include Model Predictive Control, described below. APC is typically implemented using function blocks or custom programming capabilities at the DCS level. In some cases, APC resides at the supervisory control computer level.
Multivariable model predictive control (MPC) is a popular technology, usually deployed on a supervisory control computer, that identifies important independent and dependent process variables and the dynamic relationships (models) between them and often uses matrix-math based control and optimization algorithms to control multiple variables simultaneously. One requirement of MPC is that the models must be linear across the operating range of the controller. MPC has been a prominent part of APC since supervisory computers first brought the necessary computational capabilities to control systems in the 1980s.
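The receding-horizon idea behind MPC can be illustrated with a deliberately minimal, unconstrained sketch for a single-variable first-order linear process x[k+1] = a·x[k] + b·u[k]. Commercial MPC packages add constraints, move-suppression penalties, and multivariable models, none of which are shown here; the model parameters below are arbitrary.

```python
import numpy as np

def mpc_first_move(a, b, x0, setpoint, horizon=10):
    """One iteration of unconstrained linear MPC for x[k+1] = a*x[k] + b*u[k].

    Builds the prediction matrices over the horizon, solves the
    least-squares tracking problem, and returns only the first control
    move (the receding-horizon principle).
    """
    # Free response: x[k+i+1] = a**(i+1) * x0 plus forced response G @ u.
    F = np.array([a ** (i + 1) for i in range(horizon)])
    G = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Choose the move sequence minimizing ||setpoint - (F*x0 + G @ u)||^2.
    target = np.full(horizon, setpoint) - F * x0
    u, *_ = np.linalg.lstsq(G, target, rcond=None)
    return u[0]

# Simulate the closed loop: the controller drives x toward 1.0.
x = 0.0
for _ in range(20):
    u = mpc_first_move(a=0.9, b=0.5, x0=x, setpoint=1.0)
    x = 0.9 * x + 0.5 * u
```

With a perfect model and no penalties this collapses to deadbeat control; real formulations weight control effort to avoid such aggressive moves.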
Nonlinear MPC is similar to multivariable MPC in that it incorporates dynamic models and matrix-math based control; however, it does not require model linearity. Nonlinear MPC can accommodate processes with models with varying process gains and dynamics (i.e., dead times and lag times).
Inferential control: The concept behind inferential control is to calculate a stream property from readily available process measurements, such as temperature and pressure, that otherwise might be too costly or time-consuming to measure directly in real time. The accuracy of the inference can be periodically cross-checked with laboratory analysis. Inferential measurements can be utilized in place of actual online analyzers, whether for operator information, cascaded to base-layer process controllers, or multivariable controller CVs.
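A minimal soft-sensor sketch of this idea, fitting an assumed linear relationship to hypothetical historical data (the variable names, units, and numbers are all illustrative; real inferentials are built from plant data and validated against the lab):

```python
import numpy as np

# Hypothetical historical operating data: tray temperature (C), column
# pressure (kPa), and the lab-measured product property to be inferred.
temps = np.array([150.0, 155.0, 160.0, 165.0, 170.0])
pressures = np.array([101.0, 99.0, 104.0, 102.0, 106.0])
lab_values = 0.8 * temps + 1.0 * pressures + 100.0  # synthetic lab results

# Fit a linear soft sensor: property ~ c0 + c1*T + c2*P.
X = np.column_stack([np.ones_like(temps), temps, pressures])
coeffs, *_ = np.linalg.lstsq(X, lab_values, rcond=None)

def inferred_property(temp_c, pressure_kpa):
    """Real-time estimate of the lab property from cheap measurements."""
    return float(coeffs @ np.array([1.0, temp_c, pressure_kpa]))
```

The periodic lab cross-check mentioned above would be implemented as a bias update: the difference between the latest lab result and the inference is filtered and added to subsequent estimates.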
Sequential control refers to discontinuous time- and event-based automation sequences that occur within continuous processes. These may be implemented as a collection of time and logic function blocks, a custom algorithm, or a formalized sequential function chart methodology.
Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation, and genetic algorithms.
== Related Technologies ==
The following technologies are related to APC and, in some contexts, can be considered part of APC, but are generally separate technologies having their own (or in need of their own) Wiki articles.
Statistical process control (SPC), despite its name, is much more common in discrete parts manufacturing and batch process control than in continuous process control. In SPC, “process” refers to the work and quality control process, rather than continuous process control.
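A minimal Shewhart-style sketch of the SPC idea: compute three-sigma control limits from sample data and flag any points falling outside them. The measurements below are illustrative; production SPC distinguishes subgroup charts (X-bar/R) from the individuals chart shown here.

```python
import statistics

def control_limits(samples, n_sigma=3):
    """Shewhart-style chart limits: mean +/- n_sigma * sample std dev."""
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return mean - n_sigma * sd, mean, mean + n_sigma * sd

# Hypothetical part dimensions from a stable process.
measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, center, ucl = control_limits(measurements)

# Points outside the limits signal special-cause variation.
out_of_control = [x for x in measurements if not lcl <= x <= ucl]
```

A point outside the limits prompts investigation of the work process itself, which is the sense in which "process" is used in SPC.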
Batch process control (see ANSI/ISA-88) is employed in non-continuous batch processes, such as many pharmaceuticals, chemicals, and foods.
Simulation-based optimization incorporates dynamic or steady-state computer-based process simulation models to determine more optimal operating targets in real-time, i.e., periodically, ranging from hourly to daily. This is sometimes considered a part of APC, but in practice, it is still an emerging technology and is more often part of MPO.
Manufacturing planning and optimization (MPO) refers to ongoing business activity to arrive at optimal operating targets that are then implemented in the operating organization, either manually or, in some cases, automatically communicated to the process control system.
Safety instrumented system refers to a system independent of the process control system, both physically and administratively, whose purpose is to assure the basic safety of the process.
== APC Business and Professionals ==
Those responsible for the design, implementation, and maintenance of APC applications are often referred to as APC Engineers or Control Application Engineers. Usually, their education is dependent upon the field of specialization. For example, in the process industries, many APC Engineers have a chemical engineering background, combining process control and chemical processing expertise.
Most large operating facilities, such as oil refineries, employ a number of control system specialists and professionals, ranging from field instrumentation, regulatory control system (DCS and PLC), advanced process control, and control system network and security. Depending on facility size and circumstances, these personnel may have responsibilities across multiple areas or be dedicated to each area. Many process control service companies can be hired for support and services in each area.
== Artificial Intelligence and Process Control ==
The use of artificial intelligence, machine learning, and deep learning techniques in process control is also considered an advanced process control approach in which intelligence is used to optimize operational parameters further.
For decades, operations and logic in process control systems in oil and gas have been based only on physics equations that dictate parameters along with operators’ interactions based on experience and operating manuals. Artificial intelligence and machine learning algorithms can look into the dynamic operational conditions, analyze them, and suggest optimized parameters that can either directly tune logic parameters or give suggestions to operators. Interventions by such intelligent models lead to optimization in cost, production, and safety.
== Terminology ==
APC: Advanced process control, including feedforward, decoupling, inferential, and custom algorithms; usually implies DCS-based.
ARC: Advanced regulatory control, including adaptive gain, override, logic, fuzzy logic, sequence control, device control, and custom algorithms; usually implies DCS-based.
Base-Layer: Includes DCS, SIS, field devices, and other DCS subsystems, such as analyzers, equipment health systems, and PLCs.
BPCS: Basic process control system (see "base-layer")
DCS: Distributed control system, often synonymous with BPCS
MPO: Manufacturing planning and optimization
MPC: Multivariable model predictive control
SIS: Safety instrumented system
SME: Subject matter expert
== External links ==
Article about Advanced Process Control.
Internal control, as defined by accounting and auditing, is a process for assuring achievement of an organization's objectives in operational effectiveness and efficiency, reliable financial reporting, and compliance with laws, regulations and policies. A broad concept, internal control involves everything that controls risks to an organization.
It is a means by which an organization's resources are directed, monitored, and measured. It plays an important role in detecting and preventing fraud and protecting the organization's resources, both physical (e.g., machinery and property) and intangible (e.g., reputation or intellectual property such as trademarks).
At the organizational level, internal control objectives relate to the reliability of financial reporting, timely feedback on the achievement of operational or strategic goals, and compliance with laws and regulations. At the specific transaction level, internal control refers to the actions taken to achieve a specific objective (e.g., ensuring the organization's payments to third parties are for valid services rendered). Internal control procedures reduce process variation, leading to more predictable outcomes. Internal control is a key element of the Foreign Corrupt Practices Act (FCPA) of 1977 and the Sarbanes–Oxley Act of 2002, which required improvements in internal control in United States public corporations. Internal controls within business entities are also referred to as operational controls. The main controls in place are sometimes referred to as "key financial controls" (KFCs).
== Early history of internal control ==
Internal controls have existed since ancient times. In Hellenistic Egypt there was a dual administration, with one set of bureaucrats charged with collecting taxes and another with supervising them. In the Republic of China, the Control Yuan (监察院; pinyin: Jiānchá Yuàn), one of the five branches of government, is an investigatory agency that monitors the other branches of government.
== Definitions ==
There are many definitions of internal control, as it affects the various constituencies (stakeholders) of an organization in various ways and at different levels of aggregation.
Under the COSO Internal Control-Integrated Framework, a widely used framework in not only the United States but around the world, internal control is broadly defined as a process, effected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of objectives relating to operations, reporting, and compliance.
COSO defines internal control as having five components:
Control Environment: sets the tone for the organization, influencing the control consciousness of its people. It is the foundation for all other components of internal control.
Risk Assessment: the identification and analysis of relevant risks to the achievement of objectives, forming a basis for how the risks should be managed.
Information and Communication: systems or processes that support the identification, capture, and exchange of information in a form and time frame that enable people to carry out their responsibilities.
Control Activities: the policies and procedures that help ensure management directives are carried out.
Monitoring: processes used to assess the quality of internal control performance over time.
The COSO definition relates to the aggregate control system of the organization, which is composed of many individual control procedures.
Discrete control procedures, or controls are defined by the SEC as: "...a specific set of policies, procedures, and activities designed to meet an objective. A control may exist within a designated function or activity in a process. A control’s impact ... may be entity-wide or specific to an account balance, class of transactions or application. Controls have unique characteristics – for example, they can be: automated or manual; reconciliations; segregation of duties; review and approval authorizations; safeguarding and accountability of assets; preventing or detecting error or fraud. Controls within a process may consist of financial reporting controls and operational controls (that is, those designed to achieve operational objectives)."
== Context ==
More generally, setting objectives, budgets, plans and other expectations establishes criteria for control. Control itself exists to keep performance or a state of affairs within what is expected, allowed or accepted. Control built within a process is internal in nature. It operates through a combination of interrelated components, such as a social environment affecting the behavior of employees, information necessary for control, and policies and procedures. An internal control structure is a plan determining how internal control is composed of these elements.
The concepts of corporate governance also rely heavily on the necessity of internal controls. Internal controls help ensure that processes operate as designed and that risk responses (risk treatments) in risk management are carried out (COSO II). In addition, conditions must be in place to ensure that the aforementioned procedures are performed as intended: right attitudes, integrity and competence, and monitoring by managers.
== Roles and responsibilities in internal control ==
According to the COSO Framework, everyone in an organization has responsibility for internal control to some extent. Virtually all employees produce information used in the internal control system or take other actions needed to affect control. Also, all personnel should be responsible for communicating upward problems in operations, non-compliance with the code of conduct, or other policy violations or illegal actions. Each major entity in corporate governance has a particular role to play:
=== Management ===
The Chief Executive Officer (the top manager) of the organization has overall responsibility for designing and implementing effective internal control. More than any other individual, the chief executive sets the "tone at the top" that affects integrity and ethics and other factors of a positive control environment. In a large company, the chief executive fulfills this duty by providing leadership and direction to senior managers and reviewing the way they're controlling the business. Senior managers, in turn, assign responsibility for establishment of more specific internal control policies and procedures to personnel responsible for the unit's functions. In a smaller entity, the influence of the chief executive, often an owner-manager, is usually more direct. In any event, in a cascading responsibility, a manager is effectively a chief executive of his or her sphere of responsibility. Of particular significance are financial officers and their staffs, whose control activities cut across, as well as up and down, the operating and other units of an enterprise.
=== Board of directors ===
Management is accountable to the board of directors, which provides governance, guidance and oversight. Effective board members are objective, capable and inquisitive. They also have a knowledge of the entity's activities and environment, and commit the time necessary to fulfil their board responsibilities. Management may be in a position to override controls and ignore or stifle communications from subordinates, enabling a dishonest management which intentionally misrepresents results to cover its tracks. A strong, active board, particularly when coupled with effective upward communications channels and capable financial, legal and internal audit functions, is often best able to identify and correct such a problem.
== Audit roles and responsibilities ==
=== Auditors ===
The internal auditors and external auditors of the organization also measure the effectiveness of internal control through their efforts. They assess whether the controls are properly designed, implemented and working effectively, and make recommendations on how to improve internal control. They may also review Information technology controls, which relate to the IT systems of the organization. To provide reasonable assurance that internal controls involved in the financial reporting process are effective, they are tested by the external auditor (the organization's public accountants), who are required to opine on the internal controls of the company and the reliability of its financial reporting.
=== Audit committee ===
The role and the responsibilities of the audit committee, in general terms, are to:
(a) Discuss with management, internal and external auditors and major stakeholders the quality and adequacy of the organization's internal controls system and risk management process, and their effectiveness and outcomes, and meet regularly and privately with the Director of Internal Audit;
(b) Review and discuss with management and the external auditors, and approve, the audited financial statements of the organization, and make a recommendation regarding inclusion of those financial statements in any public filing. Also review with management and the independent auditor the effect of regulatory and accounting initiatives as well as off-balance sheet issues in the organization's financial statements;
(c) Review and discuss with management the types of information to be disclosed and the types of presentations to be made with respect to the company's earnings press release and financial information and earnings guidance provided to analysts and rating agencies;
(d) Confirm the scope of audits to be performed by the external and internal auditors, monitor progress and review results, and review fees and expenses. Review significant findings or unsatisfactory internal audit reports, or audit problems or difficulties encountered by the external independent auditor. Monitor management's response to all audit findings;
(e) Manage complaints concerning accounting, internal accounting controls or auditing matters;
(f) Receive regular reports from the chief executive officer, chief financial officer and the company's other control committees regarding deficiencies in the design or operation of internal controls and any fraud that involves management or other employees with a significant role in internal controls; and
(g) Support management in resolving conflicts of interest, monitor the adequacy of the organization's internal controls, and ensure that all fraud cases are acted upon.
=== Personnel benefits committee ===
The role and the responsibilities of the personnel benefits committee, in general terms, are to:
(a) Approve and oversee the administration of the company's Executive Compensation Program;
(b) Review and approve specific compensation matters for the chief executive officer, chief operating officer (if applicable), chief financial officer, general counsel, senior human resources officer, treasurer, director, corporate relations and management, and company directors;
(c) Review, as appropriate, any changes to compensation matters for the officers listed above with the board; and
(d) Review and monitor all human-resource-related performance and compliance activities and reports, including the performance management system.
The committee also ensures that benefit-related performance measures are properly used by the management of the organization.
=== Operating staff ===
All staff members should be responsible for reporting problems of operations, monitoring and improving their performance, and monitoring non-compliance with the corporate policies and various professional codes, or violations of policies, standards, practices and procedures. Their particular responsibilities should be documented in their individual personnel files. In performance management activities they take part in all compliance and performance data collection and processing activities as they are part of various organizational units and may also be responsible for various compliance and operational-related activities of the organization.
Staff and junior managers may be involved in evaluating the controls within their own organizational unit using a control self-assessment.
=== Continuous controls monitoring ===
Advances in technology and data analysis have led to the development of numerous tools which can automatically evaluate the effectiveness of internal controls. Used in conjunction with continuous auditing, continuous controls monitoring provides assurance on financial information flowing through the business processes.
=== Auditing standards ===
There are laws and regulations on internal control related to financial reporting in a number of jurisdictions. In the U.S. these regulations are specifically established by Sections 404 and 302 of the Sarbanes-Oxley Act. Guidance on auditing these controls is specified in:
SSAE No. 18 published by the American Institute of Certified Public Accountants (AICPA)
Auditing Standard No. 5 published by the Public Company Accounting Oversight Board (PCAOB)
SEC guidance which is further discussed in SOX 404 top-down risk assessment.
== Limitations ==
Internal control can provide reasonable, not absolute, assurance that the objectives of an organization will be met. The concept of reasonable assurance implies a high degree of assurance, constrained by the costs and benefits of establishing incremental control procedures.
Effective internal control implies the organization generates reliable financial reporting and substantially complies with the laws and regulations that apply to it. However, whether an organization achieves operational and strategic objectives may depend on factors outside the enterprise, such as competition or technological innovation. These factors are outside the scope of internal control; therefore, effective internal control provides only timely information or feedback on progress towards the achievement of operational and strategic objectives, but cannot guarantee their achievement.
== Describing internal controls ==
Internal controls may be described in terms of:
a) the pertinent objective or financial statement assertion
b) the nature of the control activity itself.
=== Objective or assertions categorization ===
Assertions are representations by management embodied in the financial statements. For example, if a financial statement shows a balance of $1,000 of fixed assets, management is asserting that the fixed assets actually exist as of the date of the financial statements, that their valuation is exactly $1,000 (based on historical cost or fair value, depending on the applicable reporting framework and standards), and that the entity has all rights and obligations arising from such assets (e.g., if they are leased, this must be disclosed accordingly). Further, such fixed assets must be disclosed and represented correctly in the financial statements according to the financial reporting framework applicable to the company.
Controls may be defined against the particular financial statement assertion to which they relate. There are five such assertions, forming the acronym "PERCV" (pronounced "perceive"):
Presentation and disclosure: Accounts and disclosures are properly described in the financial statements of the organization.
Existence/Occurrence/Validity: Only valid or authorized transactions are processed.
Rights and obligations: Assets are the rights of the organization and the liabilities are its obligations as of a given date.
Completeness: All transactions are processed that should be.
Valuation: Transactions are valued accurately using the proper methodology, such as a specified means of computation or formula.
For example, a validity control objective might be: "Payments are made only for authorized products and services received." A typical control procedure would be: "The payable system compares the purchase order, receiving record, and vendor invoice prior to authorizing payment." Management is responsible for implementing appropriate controls that apply to all transactions in their areas of responsibility.
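The three-way match described above can be sketched in code. This is an illustrative sketch only, not from the source: all field names, document structures, and the price tolerance are hypothetical assumptions chosen for the example.

```python
# Hypothetical sketch of the payable three-way match control: a payment is
# authorized only when the purchase order, receiving record, and vendor
# invoice agree. All field names and values here are invented for illustration.

def three_way_match(purchase_order, receiving_record, invoice, tolerance=0.01):
    """Return True only if the three documents agree within a price tolerance."""
    # Validity: the invoice must reference the authorized purchase order.
    if invoice["po_number"] != purchase_order["po_number"]:
        return False
    # Only pay for quantities actually received.
    if invoice["quantity"] > receiving_record["quantity_received"]:
        return False
    # Valuation: the invoiced unit price must match the agreed PO price.
    if abs(invoice["unit_price"] - purchase_order["unit_price"]) > tolerance:
        return False
    return True

po = {"po_number": "PO-1001", "unit_price": 25.00}
receipt = {"po_number": "PO-1001", "quantity_received": 40}
inv = {"po_number": "PO-1001", "quantity": 40, "unit_price": 25.00}

assert three_way_match(po, receipt, inv)           # all documents agree: pay
inv_bad = {**inv, "quantity": 50}                  # billed more than received
assert not three_way_match(po, receipt, inv_bad)   # payment blocked
```

The control is preventive: the comparison happens before payment is authorized, so a mismatch stops the transaction rather than being detected after the fact.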
=== Activity categorization ===
Control activities may also be explained by the type or nature of activity. These include (but are not limited to):
Segregation of duties – separating authorization, custody, and record keeping roles to prevent fraud or error by one person.
Authorization of transactions – review of particular transactions by an appropriate person.
Retention of records – maintaining documentation to substantiate transactions.
Supervision or monitoring of operations – observation or review of ongoing operational activity.
Physical safeguards – usage of cameras, locks, physical barriers, etc. to protect property, such as merchandise inventory.
Top-level reviews – analysis of actual results versus organizational goals or plans, periodic and regular operational reviews, metrics, and other key performance indicators (KPIs).
IT general controls – Controls related to: a) Security, to ensure access to systems and data is restricted to authorized personnel, such as usage of passwords and review of access logs; and b) Change management, to ensure program code is properly controlled, such as separation of production and test environments, system and user testing of changes prior to acceptance, and controls over migration of code into production.
IT application controls – Controls over information processing enforced by IT applications, such as edit checks to validate data entry, accounting for transactions in numerical sequences, and comparing file totals with control accounts.
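The IT application controls in the last list item above can be sketched concretely: an edit check on data entry, a numerical-sequence check, and a control-total comparison. This is a minimal illustration, not a reference implementation; the field names, account format, and batch data are assumptions made for the example.

```python
# Illustrative sketch of three common IT application controls. The record
# layout (8-character account codes, positive amounts) is a hypothetical
# assumption for this example.

def edit_check(record):
    """Edit check: reject obviously invalid entries before processing."""
    return record["amount"] > 0 and len(record["account"]) == 8

def sequence_gaps(transaction_ids):
    """Sequence check: report missing numbers in what should be unbroken."""
    present = set(transaction_ids)
    lo, hi = min(present), max(present)
    return [n for n in range(lo, hi + 1) if n not in present]

def totals_agree(detail_records, control_account_balance):
    """Control-total check: file total must equal the control account."""
    return sum(r["amount"] for r in detail_records) == control_account_balance

batch = [
    {"id": 101, "account": "10010001", "amount": 250.0},
    {"id": 102, "account": "10010002", "amount": 125.0},
    {"id": 104, "account": "10010003", "amount": 625.0},
]
assert all(edit_check(r) for r in batch)
assert sequence_gaps([r["id"] for r in batch]) == [103]  # one transaction missing
assert totals_agree(batch, 1000.0)
```

Each check maps to an assertion from the PERCV list: the edit check supports validity, the sequence check supports completeness, and the control-total comparison supports valuation.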
=== Control precision ===
Control precision describes the alignment or correlation between a particular control procedure and a given control objective or risk. A control with direct impact on the achievement of an objective (or mitigation of a risk) is said to be more precise than one with indirect impact on the objective or risk. Precision is distinct from sufficiency; that is, multiple controls with varying degrees of precision may be involved in achieving a control objective or mitigating a risk.
Precision is an important factor in performing a SOX 404 top-down risk assessment. After identifying specific financial reporting material misstatement risks, management and the external auditors are required to identify and test controls that mitigate the risks. This involves making judgments regarding both precision and sufficiency of controls required to mitigate the risks.
Risks and controls may be entity-level or assertion-level under the PCAOB guidance.
Entity-level controls are identified to address entity-level risks. However, a combination of entity-level and assertion-level controls are typically identified to address assertion-level risks. The PCAOB set forth a three-level hierarchy for considering the precision of entity-level controls. Later guidance by the PCAOB regarding small public firms provided several factors to consider in assessing precision.
== Types of internal control policies ==
Internal control plays an important role in the prevention and detection of fraud. Under the Sarbanes-Oxley Act, companies are required to perform a fraud risk assessment and assess related controls. This typically involves identifying scenarios in which theft or loss could occur and determining if existing control procedures effectively manage the risk to an acceptable level. The risk that senior management might override important financial controls to manipulate financial reporting is also a key area of focus in fraud risk assessment.
The AICPA, IIA, and ACFE also sponsored a guide published during 2008 that includes a framework for helping organizations manage their fraud risk.
== Internal controls and process improvement ==
Controls can be evaluated and improved to make a business operation run more effectively and efficiently. For example, automating controls that are manual in nature can save costs and improve transaction processing. If the internal control system is thought of by executives as only a means of preventing fraud and complying with laws and regulations, an important opportunity may be missed. Internal controls can also be used to systematically improve businesses, particularly in regard to effectiveness and efficiency.
== See also ==
Chief audit executive
Three lines of defence
== References ==
== External links ==
Organization of Supreme Audit Institutions (INTOSAI)
Committee of Sponsoring Organizations of the Treadway Commission: Internal Control – Integrated Framework (1992)
New York State Internal Control Association (NYSICA)
Ouanouki, Rafik; April, Alain (2007). "IT Process Conformance Measurement: A Sarbanes-Oxley Requirement" (PDF). Proceedings of the IWSM – Mensura 2007.
A resilient control system is one that "maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature".
Computerized or digital control systems are used to reliably automate many industrial operations such as power plants or automobiles. The complexity of these systems and how the designers integrate them, the roles and responsibilities of the humans that interact with the systems, and the cyber security of these highly networked systems have led to a new paradigm in research philosophy for next-generation control systems. Resilient Control Systems consider all of these elements and those disciplines that contribute to a more effective design, such as cognitive psychology, computer science, and control engineering to develop interdisciplinary solutions. These solutions consider things such as how to tailor the control system operating displays to best enable the user to make an accurate and reproducible response, how to design in cybersecurity protections such that the system defends itself from attack by changing its behaviors, and how to better integrate widely distributed computer control systems to prevent cascading failures that result in disruptions to critical industrial operations.
In the context of cyber-physical systems, resilient control systems are an aspect that focuses on the unique interdependencies of a control system, as compared to information technology computer systems and networks, because of their importance in running critical industrial operations.
== Introduction ==
Originally intended to provide a more efficient mechanism for controlling industrial operations, the development of digital control systems allowed for flexibility in integrating distributed sensors and operating logic while maintaining a centralized interface for human monitoring and interaction. This ease of readily adding sensors and logic through software, which was once done with relays and isolated analog instruments, has led to wide acceptance and integration of these systems in all industries. However, these digital control systems have often been integrated in phases to cover different aspects of an industrial operation and connected over a network, leading to a complex interconnected and interdependent system. While the control theory applied is often nothing more than a digital version of its analog counterpart, the dependence of digital control systems upon communications networks has precipitated the need for cybersecurity because of potential effects on the confidentiality, integrity and availability of the information. Achieving resilience in the next generation of control systems will therefore require addressing the complex control system interdependencies, including the human systems interaction and cybersecurity, which is a recognized challenge.
From a philosophical standpoint, advancing the area of resilient control systems requires a definition, metrics, and consideration of the challenges and the associated disciplinary fusion needed to address them. From these follows the value proposition for investment and adoption. Each of these topics is discussed in what follows, but for perspective consider Fig. 1.
== Defining resilience ==
Research in resilience engineering over the last decade has focused on two areas: organizational and information technology. Organizational resilience considers the ability of an organization to adapt and survive in the face of threats, including the prevention or mitigation of unsafe, hazardous or compromising conditions that threaten its very existence. Information technology resilience has been considered from a number of standpoints. Networking resilience has been considered as quality of service. Computing has considered such issues as dependability and performance in the face of unanticipated changes. However, based upon the application of control dynamics to industrial processes, functionality and determinism are primary considerations that are not captured by the traditional objectives of information technology.
Considering the paradigm of control systems, one suggested definition is that "resilient control systems are those that tolerate fluctuations via their structure, design parameters, control structure and control parameters". However, this definition is taken from the perspective of control theory application to a control system. It does not directly consider the malicious actor and cyber security, which has prompted an alternative proposed definition: "an effective reconstitution of control under attack from intelligent adversaries". However, this definition focuses only on resilience in response to a malicious actor. To consider the cyber-physical aspects of a control system, a definition for resilience should consider both benign and malicious human interaction, in addition to the complex interdependencies of the control system application.
The term "recovery" has been used in the context of resilience, paralleling the response of a rubber ball, which stays intact when a force is exerted on it and recovers its original dimensions after the force is removed. Considering the rubber ball as a system, resilience could then be defined as its ability to maintain a desired level of performance or normalcy without irrecoverable consequences. While resilience in this context is based upon the yield strength of the ball, control systems require an interaction with the environment, namely the sensors, valves and pumps that make up the industrial operation. To be reactive to this environment, a control system requires an awareness of its state in order to make corrective changes to the industrial process and maintain normalcy. With this in mind, and in consideration of the discussed cyber-physical aspects of human systems integration and cyber security, as well as other definitions for resilience at a broader critical infrastructure level, the following can be deduced as a definition of a resilient control system:
A resilient control system is one that maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature.
Considering the flow of a digital control system as a basis, a resilient control system framework can be designed. Referring to the left side of Fig. 2, a resilient control system holistically considers the measures of performance or normalcy for the state space. At the center, an understanding of performance and priority provide the basis for an appropriate response by a combination of human and automation, embedded within a multi-agent, semi-autonomous framework. Finally, to the right, information must be tailored to the consumer to address the need and position a desirable response. Several examples or scenarios of how resilience differs and provides benefit to control system design are available in the literature.
== Areas of resilience ==
Some primary tenets of resilience, as contrasted with traditional reliability, have presented themselves in considering an integrated approach to resilient control systems. These cyber-physical tenets complement the fundamental concept of dependable or reliable computing by characterizing resilience in regard to control system concerns, including design considerations that provide a level of understanding and assurance in the safe and secure operation of an industrial facility. These tenets are discussed individually below to summarize some of the challenges that must be addressed in order to achieve resilience.
=== Human systems ===
The benign human has an ability to quickly understand novel solutions and to adapt to unexpected conditions. This behavior can provide additional resilience to a control system, but reproducibly predicting human behavior is a continuing challenge. The ability to capture historic human preferences can be applied through Bayesian inference and Bayesian belief networks, but ideally a solution would consider direct understanding of human state using sensors such as an EEG. Considering control system design and interaction, the goal would be to tailor the amount of automation necessary to achieve some level of optimal resilience for this mixed-initiative response. Presented to the human would be the actionable information that provides the basis for a targeted, reproducible response.
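The Bayesian-inference idea mentioned above can be sketched minimally: historic operator choices update a belief about which response strategy the operator is following, which a design could use to tailor automation. This is an assumption-laden illustration; the strategies, priors, and likelihoods are all invented for the example.

```python
# Hypothetical sketch: Bayesian update over two candidate operator strategies
# based on an observed choice. Strategy names and probabilities are invented.

def bayes_update(prior, likelihoods, observation):
    """Posterior P(strategy) after observing one operator choice."""
    unnorm = {s: prior[s] * likelihoods[s][observation] for s in prior}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

# How often each assumed strategy produces a "manual" (vs "automatic") action:
prior = {"conservative": 0.5, "aggressive": 0.5}
likelihoods = {
    "conservative": {"manual": 0.8, "automatic": 0.2},
    "aggressive":   {"manual": 0.3, "automatic": 0.7},
}

posterior = bayes_update(prior, likelihoods, "manual")
assert posterior["conservative"] > posterior["aggressive"]  # belief shifts
assert abs(sum(posterior.values()) - 1.0) < 1e-9            # still a distribution
```

Repeating the update over a history of choices is what a Bayesian belief network generalizes: beliefs about the operator sharpen as evidence accumulates, letting automation levels be adjusted accordingly.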
=== Cyber security ===
In contrast to the challenges of predicting and integrating the benign human with control systems, the abilities of the malicious actor (or hacker) to undermine desired control system behavior also create a significant challenge to control system resilience. Application of the dynamic probabilistic risk analysis used in human reliability can provide some basis for modeling the benign actor. However, the decidedly malicious intentions of an adversarial individual, organization or nation make the human variable difficult to model in both objectives and motives. In defining a control system response to such intentions, the malicious actor counts on some level of recognized behavior to gain an advantage and provide a pathway to undermining the system. Whether performed separately in preparation for a cyber attack, or on the system itself, these behaviors can provide opportunity for a successful attack without detection. Therefore, in considering resilient control system architecture, atypical designs that embed actively and passively implemented randomization of attributes would be suggested to reduce this advantage.
=== Complex networks and networked control systems ===
While much of the current critical infrastructure is controlled by a web of interconnected control systems, with architectures termed either distributed control systems (DCS) or supervisory control and data acquisition (SCADA), the application of control is moving toward a more decentralized state. In moving to a smart grid, the complex interconnected nature of individual homes, commercial facilities, and diverse power generation and storage creates both an opportunity and a challenge in ensuring that the resulting system is more resilient to threats. The ability to operate these systems to achieve a global optimum for multiple considerations, such as overall efficiency, stability and security, will require mechanisms to holistically design complex networked control systems. Multi-agent methods suggest a mechanism to tie a global objective to distributed assets, allowing for management and coordination of assets for optimal benefit, and for semi-autonomous but constrained controllers that can react rapidly to maintain resilience under rapidly changing conditions.
== Base metrics for resilient control systems ==
Establishing a metric that can capture the resilience attributes can be complex, at least if considered based upon differences between the interactions or interdependencies. Evaluating control, cyber and cognitive disturbances, especially if considered from a disciplinary standpoint, leads to measures that have already been established. However, if the metric were instead based upon a normalizing dynamic attribute, such as a performance characteristic that can be impacted by degradation, an alternative is suggested. Specifically, applications of base metrics to resilience characteristics are given as follows by type of disturbance:
Physical disturbances:
Time latency affecting stability
Data integrity affecting stability
Cyber disturbances:
Time latency
Data confidentiality, integrity and availability
Cognitive disturbances:
Time latency in response
Data digression from desired response
Such performance characteristics exist with both time and data integrity. Time, both in terms of delay of mission and communications latency, and data, in terms of corruption or modification, are normalizing factors. In general, the idea is to base the metric on “what is expected” and not necessarily the actual initiator to the degradation. Considering time as a metrics basis, resilient and un-resilient systems can be observed in Fig. 3.
Dependent upon the abscissa metrics chosen, Fig. 3 reflects a generalization of the resiliency of a system. Several common terms are represented on this graphic, including robustness, agility, adaptive capacity, adaptive insufficiency, resiliency and brittleness. To overview these terms, the following explanations of each are provided below:
Agility: The derivative of the disturbance curve. This average defines the ability of the system to resist degradation on the downward slope, but also to recover on the upward. Primarily considered a time based term that indicates impact to mission. Considers both short term system and longer term human responder actions.
Adaptive Capacity: The ability of the system to adapt or transform from impact and maintain minimum normalcy. Considered a value between 0 and 1, where 1 is fully operational and 0 is the resilience threshold.
Adaptive Insufficiency: The inability of the system to adapt or transform from impact, indicating an unacceptable performance loss due to the disturbance. Considered a value between 0 and -1, where 0 is the resilience threshold and -1 is total loss of operation.
Brittleness: The area under the disturbance curve as intersected by the resilience threshold. This indicates the impact from the loss of operational normalcy.
Phases of Resilient Control System Preparation and Disturbance Response:
Recon: Maintaining proactive state awareness of system conditions and degradation
Resist: System response to recognized conditions, both to mitigate and counter
Respond: System degradation has been stopped and returning system performance
Restore: Longer term performance restoration, which includes equipment replacement
Resiliency: The converse of brittleness, which for a resilience system is “zero” loss of minimum normalcy.
Robustness: A positive or negative number associated with the area between the disturbance curve and the resilience threshold, indicating either the capacity or insufficiency, respectively.
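The terms defined above can be made concrete with a small numeric sketch. This is an assumption-laden illustration, not from the source: performance is sampled as a normalized curve (1.0 = fully operational), the resilience threshold is taken as a value strictly between 0 and 1, and brittleness/robustness are approximated as discrete areas relative to that threshold.

```python
# Illustrative computation of the resilience terms above for a sampled
# performance-vs-time disturbance curve. Normalization choices are assumptions.

def resilience_metrics(performance, threshold, dt=1.0):
    """performance: samples in [0, 1]; threshold: minimum normalcy in (0, 1)."""
    margin = [p - threshold for p in performance]   # signed distance to threshold
    # Brittleness: area of the curve below the resilience threshold.
    brittleness = sum(-m * dt for m in margin if m < 0)
    # Robustness: net signed area between the curve and the threshold.
    robustness = sum(m * dt for m in margin)
    # Adaptive capacity (0..1) or insufficiency (-1..0): worst-case excursion,
    # rescaled so 1 is fully operational and -1 is total loss of operation.
    worst = min(performance)
    if worst >= threshold:
        adaptive = (worst - threshold) / (1.0 - threshold)
    else:
        adaptive = (worst - threshold) / threshold
    return {"brittleness": brittleness, "robustness": robustness,
            "adaptive": adaptive}

# A disturbance that degrades performance but never breaches the threshold:
resilient = resilience_metrics([1.0, 0.9, 0.7, 0.9, 1.0], threshold=0.5)
assert resilient["brittleness"] == 0.0   # zero loss of minimum normalcy
assert resilient["adaptive"] > 0         # adaptive capacity remains

# A disturbance that breaches the threshold (brittle behaviour):
brittle = resilience_metrics([1.0, 0.6, 0.3, 0.6, 1.0], threshold=0.5)
assert brittle["brittleness"] > 0.0
assert brittle["adaptive"] < 0           # adaptive insufficiency
```

Agility would correspond to the slopes between successive samples of the same curve; the sketch keeps only the area-based terms for brevity.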
On the abscissa of Fig. 3, it can be recognized that cyber and cognitive influences can affect both the data and the time, which underscores the relative importance of recognizing these forms of degradation in resilient control designs. For cybersecurity, a single cyberattack can degrade a control system in multiple ways. Additionally, control impacts can be characterized as indicated. While these terms are fundamental and seem of little value for those correlating impact in terms like cost, the development of use cases provide a means by which this relevance can be codified. For example, given the impact to system dynamics or data, the performance of the control loop can be directly ascertained and show approach to instability and operational impact.
== Resilience manifold for design and operation ==
The very nature of control systems implies a starting point for the development of resilience metrics. That is, the control of a physical process is based upon quantifiable performance and measures, including first-principles and stochastic ones. The ability to provide this measurement, which is the basis for correlating operational performance and adaptation, then also becomes the starting point for correlating the data and time variations that can come from cognitive or cyber-physical sources. Effective understanding is based upon developing a manifold of adaptive capacity that correlates the design (and operational) buffer. For a power system, this manifold is based upon the real and reactive power assets, the controllable assets having the latitude to maneuver, and the impact of disturbances over time. For a modern distribution system (MDS), these assets can be aggregated from the individual contributions as shown in Fig. 4. For this figure, these assets include: a) a battery, b) an alternate tie line source, c) an asymmetric P/Q-conjectured source, d) a distribution static synchronous compensator (DSTATCOM), and e) a low-latency, four-quadrant source with no energy limit.
== Examples of resilient control system developments ==
1) Current digital control system designs depend for their cyber security on what are considered border protections, i.e., firewalls, passwords, etc. If a malicious actor compromised the digital control system for an industrial operation via a man-in-the-middle attack, data within the control system could be corrupted. The industrial facility operator would have no way of knowing the data had been compromised until someone such as a security engineer recognized the attack was occurring. As operators are trained to provide a prompt, appropriate response to stabilize the industrial facility, there is a likelihood that the corrupt data would lead the operator to react to the apparent situation, causing a plant upset. In a resilient control system, as per Fig. 2, cyber and physical data are fused to recognize anomalous situations and warn the operator.
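The data-fusion idea in the example above can be sketched as follows: a reported sensor value is checked against a simple physical model, so corruption by a man-in-the-middle produces a residual that can trigger an operator warning even when the spoofed value looks plausible on its own. The tank model, variable names, and tolerance are all hypothetical assumptions for this illustration.

```python
# Minimal sketch of fusing physical-model knowledge with reported sensor data.
# A trivial mass-balance model of a tank: level changes with net flow.

def predicted_level(prev_level, inflow, outflow, dt=1.0):
    """Physics-based prediction of the next tank level."""
    return prev_level + (inflow - outflow) * dt

def check_reading(reported, prev_level, inflow, outflow, tol=0.5):
    """Flag readings inconsistent with what the physical model predicts."""
    expected = predicted_level(prev_level, inflow, outflow)
    residual = abs(reported - expected)
    return "OK" if residual <= tol else "ALERT: reading inconsistent with model"

# Consistent reading: the level rose by roughly the net inflow, as predicted.
assert check_reading(10.9, prev_level=10.0, inflow=2.0, outflow=1.0) == "OK"

# Spoofed reading: an attacker reports a flat level while the tank fills.
assert check_reading(10.0, prev_level=10.0, inflow=2.0,
                     outflow=1.0).startswith("ALERT")
```

The point is that neither data stream alone reveals the attack; it is the disagreement between the cyber-reported value and the physically predicted one that surfaces the anomaly.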
2) As our society becomes more automated, driven by factors including energy efficiency, the need to implement ever more effective control algorithms naturally follows. However, advanced control algorithms depend upon data from multiple sensors to predict the behavior of the industrial operation and make corrective responses. This type of system can become very brittle, insofar as any unrecognized degradation in a sensor can lead to incorrect responses by the control algorithm and potentially a worsened condition relative to the desired operation of the industrial facility. Therefore, implementing advanced control algorithms in a resilient control system also requires implementing diagnostic and prognostic architectures to recognize sensor degradation, as well as failures of the industrial process equipment associated with the control algorithms.
== Resilient control system solutions and the need for interdisciplinary education ==
In our world of advancing automation, our dependence upon these advancing technologies will require educated skill sets from multiple disciplines. The challenges may appear simply rooted in better design of control systems for greater safety and efficiency. However, the evolution of the technologies in the current design of automation has created a complex environment in which a cyber attack, human error (whether in design or operation), or a damaging storm can wreak havoc on the basic infrastructure. The next generation of systems will need to consider the broader picture to ensure a path forward where failures do not lead to ever greater catastrophic events. One critical resource is students, who are expected to develop the skills necessary to advance these designs and who require both a perspective on the challenges and an appreciation of the contributions of others to fulfill the need. Addressing this need, a semester course in resilient control systems was established over a decade ago at Idaho and other universities as a catalogue or special-topics focus for undergraduate and graduate students. The lessons of this course were codified in a text that provides the basis for the interdisciplinary studies. In addition, other courses have been developed to provide the perspectives and relevant examples to overview the critical infrastructure issues and provide opportunity to create resilient solutions at such universities as George Mason University and Northeastern.
Through the development of technologies designed to set the stage for next-generation automation, it has become evident that effective teams comprise several disciplines. However, developing this level of effectiveness can be time-consuming, and when done in a professional environment can expend a great deal of energy and time that provides little obvious benefit to the desired outcome. It is clear that the earlier these STEM disciplines can be successfully integrated, the more effective their practitioners are at recognizing each other's contributions and working together to achieve a common set of goals in the professional world. Team competition at venues such as Resilience Week is a natural outcome of developing such an environment, allowing interdisciplinary participation and providing an exciting challenge to motivate students to pursue a STEM education.
== Standardizing resilience and resilient control system principles ==
Standards and policy that define resilience nomenclature and metrics are needed to establish a value proposition for investment by government, academia and industry. The IEEE Industrial Electronics Society has taken the lead in forming a technical committee toward this end. The purpose of this committee is to establish metrics and standards associated with codifying promising technologies that promote resilience in automation. This effort is distinct from the supply chain community's focus on resilience and security, such as the efforts of ISO and NIST.
== Notes ==
== References ==
Cholda, P.; Tapolcai, J.; Cinkler, T.; Wajda, K.; Jajszczyk, A. (2009), "Quality of resilience as a network reliability characterization Tool", IEEE Network, 23 (2): 11–19, doi:10.1109/mnet.2009.4804331, S2CID 8610971
DHS staff (May 2005), Critical Infrastructure Protection, Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities, GAO-05-434, US Government
Hollnagel, E.; Woods, D. D.; Leveson, N (2006), Resilience Engineering: Concepts and Precepts, Aldershot Hampshire, UK: Ashgate Publishing
Kuo, B. C. (June 1995), Digital Control Systems, Oxford University Press
Lin, J.; Sedigh, S.; Hurson, A.R. (May 2011), An Agent-Based Approach to Reconciling Data Heterogeneity in Cyber-Physical Systems, 25th IEEE International Symposium on Parallel and Distributed Processing Workshops and Phd Forum (IPDPSW), pp. 93–103
Meyer, J. F. (September 2009), Defining and Evaluating Resilience: A Performability Perspective, presentation at International Workshop on Performability Modeling of Computer and Communication Systems
Mitchell, S. M.; Mannan, M. S (April 2006), "Designing Resilient Engineered Systems", Chemical Engineering Progress, 102 (4): 39–45
Rieger, C. G. (August 2010), Notional examples and benchmark aspects of a resilient control system, 3rd International Symposium on Resilient Control Systems, pp. 64–71
Rinaldi, S. M.; Peerenboom, J. P.; Kelly, T. K. (December 2001), "Identifying, Understanding and Analyzing Critical Infrastructure Interdependencies", IEEE Control Systems Magazine: 11–25
Trivedi, K. S.; Dong, S. K.; Ghosh, R. (December 2009), Resilience in Computer Systems and Networks, IEEE/ACM International Conference on Computer-Aided Design-Digest of Technical Papers, pp. 74–77
Wang, F.Y.; Liu, D. (2008), Networked Control Systems: Theory and Applications, London, UK: Springer-Verlag
Wei, D.; Ji, K. (August 2010), Resilient industrial control system (RICS): Concepts, formulation, metrics, and insights, 3rd International Symposium Resilient Control Systems (ISRCS), pp. 15–22
Wing, J. (April 2008), Cyber-Physical Systems Research Charge, St Louis, Missouri: Cyber-Physical Systems Summit
Attribution
This article incorporates public domain material from websites or documents of the United States government. Rieger, C.G.; Gertman, D.I.; McQueen, M.A. (May 2009), Resilient Control Systems: Next Generation Design Research, Catania, Italy: 2nd IEEE Conference on Human System Interaction
Rieger, Craig G.; Gertman, David I.; McQueen, Miles A. (May 2009), Resilient Control Systems: Next Generation Design Research (HSI 2009) (PDF), Idaho National Laboratory (INL)
In control theory, quantitative feedback theory (QFT), developed by Isaac Horowitz (Horowitz, 1963; Horowitz and Sidi, 1972), is a frequency domain technique utilising the Nichols chart (NC) in order to achieve a desired robust design over a specified region of plant uncertainty. Desired time-domain responses are translated into frequency domain tolerances, which lead to bounds (or constraints) on the loop transmission function. The design process is highly transparent, allowing a designer to see what trade-offs are necessary to achieve a desired performance level.
== Plant templates ==
Once a model of a system has been obtained, the system can usually be represented by its transfer function (in the Laplace domain for continuous time).
As a result of experimental measurement, values of coefficients in the Transfer Function have a range of uncertainty. Therefore, in QFT every parameter of this function is included into an interval of possible values, and the system may be represented by a family of plants rather than by a standalone expression.
{\displaystyle {\mathcal {P}}(s)=\left\lbrace {\dfrac {\prod _{i}(s+z_{i})}{\prod _{j}(s+p_{j})}},\forall z_{i}\in [z_{i,min},z_{i,max}],p_{j}\in [p_{j,min},p_{j,max}]\right\rbrace }
A frequency analysis is performed for a finite number of representative frequencies and a set of templates are obtained in the NC diagram which encloses the behaviour of the open loop system at each frequency.
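As a rough illustration of how such templates are obtained, the frequency response of the plant family can be swept over its parameter intervals at one representative frequency. The first-order plant, parameter ranges, and function name below are illustrative assumptions, not taken from the QFT literature.

```python
import numpy as np

# Sketch: compute a QFT "template" -- the set of frequency responses of an
# uncertain first-order plant P(s) = k / (s + p) over its parameter ranges,
# evaluated at one representative frequency (illustrative parameter ranges).
def template(freq, k_range, p_range, n=10):
    """Return (magnitude_dB, phase_deg) points of the plant family at s = j*freq."""
    s = 1j * freq
    mags, phases = [], []
    for k in np.linspace(*k_range, n):
        for p in np.linspace(*p_range, n):
            resp = k / (s + p)
            mags.append(20 * np.log10(abs(resp)))      # dB: Nichols chart vertical axis
            phases.append(np.degrees(np.angle(resp)))  # degrees: horizontal axis
    return mags, phases

mags, phases = template(freq=1.0, k_range=(1.0, 5.0), p_range=(0.5, 2.0))
print(len(mags))  # 100 points tracing the template region at this frequency
```

Plotting these points on the Nichols chart, one cloud per chosen frequency, gives the templates that enclose the open-loop behaviour of the uncertain plant.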
== Frequency bounds ==
Usually system performance is described in terms of robustness to instability (phase and gain margins), rejection of input and output noise disturbances, and reference tracking. In the QFT design methodology these requirements on the system are represented as frequency constraints, conditions that the compensated system loop (controller and plant) must not violate.
With these considerations and the selection of the same set of frequencies used for the templates, the frequency constraints for the behaviour of the system loop are computed and represented on the Nichols Chart (NC) as curves.
To achieve the problem requirements, a set of rules on the Open Loop Transfer Function, for the nominal plant
{\displaystyle L_{0}(s)=G(s)P_{0}(s)}
may be found. That means the nominal loop is not allowed to have its frequency value below the constraint for the same frequency, and at high frequencies the loop should not cross the Ultra High Frequency Boundary (UHFB), which has an oval shape in the center of the NC.
== Loop shaping ==
The controller design is undertaken on the NC considering the frequency constraints and the nominal loop
{\displaystyle L_{0}(s)}
of the system. At this point, the designer begins to introduce controller functions (
{\displaystyle G(s)}
) and tune their parameters, a process called Loop Shaping, until the best possible controller is reached without violation of the frequency constraints.
The experience of the designer is an important factor in finding a satisfactory controller that not only complies with the frequency restrictions but with the possible realization, complexity, and quality.
For this stage there currently exist different CAD (Computer Aided Design) packages to make the controller tuning easier.
== Prefilter design ==
Finally, the QFT design may be completed with a pre-filter (
{\displaystyle F(s)}
) design when it is required. In the case of tracking conditions a shaping on the Bode diagram may be used. Post design analysis is then performed to ensure the system response is satisfactory according with the problem requirements.
The QFT design methodology was originally developed for single-input single-output (SISO), linear time-invariant (LTI) systems, with the design process being as described above. However, it has since been extended to weakly nonlinear systems, time-varying systems, distributed parameter systems, multi-input multi-output (MIMO) systems (Horowitz, 1991), discrete systems (these using the Z-transform as transfer function), and non-minimum-phase systems. The development of CAD tools has been an important, more recent development, which simplifies and automates much of the design procedure (Borghesani et al., 1994).
Traditionally, the pre-filter is designed by using the Bode-diagram magnitude information. The use of both phase and magnitude information for the design of pre-filter was first discussed in (Boje, 2003) for SISO systems. The method was then developed to MIMO problems in (Alavi et al., 2007).
== See also ==
Control engineering
Feedback
Process control
Robotic unicycle
H infinity
Optimal control
Servomechanism
Nonlinear control
Adaptive control
Robust control
Intelligent control
State space (controls)
== References ==
Horowitz, I., 1963, Synthesis of Feedback Systems, Academic Press, New York, 1963.
Horowitz, I., and Sidi, M., 1972, "Synthesis of feedback systems with large plant ignorance for prescribed time-domain tolerances," International Journal of Control, 16(2), pp. 287–309.
Horowitz, I., 1991, "Survey of Quantitative Feedback Theory (QFT)," International Journal of Control, 53(2), pp. 255–291.
Borghesani, C., Chait, Y., and Yaniv, O., 1994, Quantitative Feedback Theory Toolbox Users Guide, The Math Works Inc., Natick, MA.
Zolotas, A. (2005, June 8). QFT - Quantitative Feedback Theory. Connexions.
Boje, E. Pre-filter design for tracking error specifications in QFT, International Journal of Robust and Nonlinear Control, Vol. 13, pp. 637–642, 2003.
Alavi, SMM., Khaki-Sedigh, A., Labibi, B. and Hayes, M.J., Improved multivariable quantitative feedback design for tracking error specifications, IET Control Theory & Applications, Vol. 1, No. 4, pp. 1046–1053, 2007.
== External links ==
Mario Garcia-Sanz, Quantitative Robust Control Engineering: Theory and Applications
In control theory, an open-loop controller, also called a non-feedback controller, is a control loop part of a control system in which the control action ("input" to the system) is independent of the "process output", which is the process variable that is being controlled. It does not use feedback to determine if its output has achieved the desired goal of the input command or process setpoint.
There are many open-loop controls, such as on/off switching of valves, machinery, lights, motors or heaters, where the control result is known to be approximately sufficient under normal conditions without the need for feedback. The advantage of using open-loop control in these cases is the reduction in component count and complexity. However, an open-loop system cannot correct any errors that it makes or correct for outside disturbances unlike a closed-loop control system.
== Open-loop and closed-loop ==
== Applications ==
An open-loop controller is often used in simple processes because of its simplicity and low cost, especially in systems where feedback is not critical. A typical example would be an older model domestic clothes dryer, for which the length of time is entirely dependent on the judgement of the human operator, with no automatic feedback of the dryness of the clothes.
For example, an irrigation sprinkler system, programmed to turn on at set times could be an example of an open-loop system if it does not measure soil moisture as a form of feedback. Even if rain is pouring down on the lawn, the sprinkler system would activate on schedule, wasting water.
Another example is a stepper motor used for control of position. Sending it a stream of electrical pulses causes it to rotate by exactly that many steps, hence the name. If the motor was always assumed to perform each movement correctly, without positional feedback, it would be open-loop control. However, if there is a position encoder, or sensors to indicate the start or finish positions, then that is closed-loop control, such as in many inkjet printers. The drawback of open-loop control of steppers is that if the machine load is too high, or the motor attempts to move too quickly, then steps may be skipped. The controller has no means of detecting this and so the machine continues to run slightly out of adjustment until reset. For this reason, more complex robots and machine tools instead use servomotors rather than stepper motors, which incorporate encoders and closed-loop controllers.
However, open-loop control is very useful and economic for well-defined systems where the relationship between input and the resultant state can be reliably modeled by a mathematical formula. For example, determining the voltage to be fed to an electric motor that drives a constant load, in order to achieve a desired speed would be a good application. But if the load were not predictable and became excessive, the motor's speed might vary as a function of the load not just the voltage, and an open-loop controller would be insufficient to ensure repeatable control of the velocity.
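The motor example can be sketched as inverting a nominal model to choose the input, with no feedback to catch a model mismatch. The linear speed model, gains, and function names below are illustrative assumptions, not a standard formulation.

```python
# Open-loop speed control sketch: assume a simple motor model
#   speed = K_v * voltage - load_drag   (constants are illustrative).
# The open-loop controller inverts the nominal model to pick a voltage;
# it never measures the actual speed.
def open_loop_voltage(desired_speed, k_v=10.0, load_drag=2.0):
    """Voltage predicted to reach desired_speed under the nominal load."""
    return (desired_speed + load_drag) / k_v

def actual_speed(voltage, k_v=10.0, load_drag=2.0):
    """What the motor actually does for a given voltage and load."""
    return k_v * voltage - load_drag

v = open_loop_voltage(100.0)
print(actual_speed(v))                  # 100.0: exact when the load matches the model
print(actual_speed(v, load_drag=10.0))  # 92.0: falls short when the load is heavier
```

The second call shows the failure mode described above: when the load deviates from the modeled value, the open-loop controller has no way to notice or correct the speed error.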
An example of this is a conveyor system that is required to travel at a constant speed. For a constant voltage, the conveyor will move at a different speed depending on the load on the motor (represented here by the weight of objects on the conveyor). In order for the conveyor to run at a constant speed, the voltage of the motor must be adjusted depending on the load. In this case, a closed-loop control system would be necessary.
Thus there are many open-loop controls, such as switching valves, lights, motors or heaters on and off, where the result is known to be approximately sufficient without the need for feedback.
== Combination with feedback control ==
A feedback control system, such as a PID controller, can be improved by combining the feedback (or closed-loop control) of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed-forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed-forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system in some situations.
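The velocity-loop combination described above can be sketched as follows; the class name, gains, and acceleration scaling are illustrative assumptions rather than a standard API.

```python
# Sketch of combining feed-forward with a PID velocity loop, per the text:
# the desired acceleration, scaled by an inertia-like gain, is added to the
# PID output so the commanded force leads the motion. Gains are illustrative.
class PIDWithFeedforward:
    def __init__(self, kp, ki, kd, kff, dt):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, desired_accel):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Closed-loop portion: corrects whatever error remains.
        feedback = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Open-loop portion: unaffected by the measurement, so it cannot
        # destabilize the loop.
        feedforward = self.kff * desired_accel
        return feedback + feedforward

ctrl = PIDWithFeedforward(kp=2.0, ki=0.5, kd=0.1, kff=1.5, dt=0.01)
u = ctrl.update(setpoint=1.0, measurement=0.9, desired_accel=4.0)
print(u)  # 7.2005: the feed-forward term (6.0) dominates the output
```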
== See also ==
Cataract, the open-loop speed controller of early beam engines
Control theory
Feed-forward
PID controller
Process control
Open-loop transfer function
== References ==
== Further reading ==
Kuo, Benjamin C. (1991). Automatic Control Systems (6th ed.). New Jersey: Prentice Hall. ISBN 0-13-051046-7.
Ziny Flikop (2004). "Bounded-Input Bounded-Predefined-Control Bounded-Output" (http://arXiv.org/pdf/cs/0411015)
Basso, Christophe (2012). "Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide". Artech House, ISBN 978-1608075577
An industrial control system (ICS) is an electronic control system and associated instrumentation used for industrial process control. Control systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems (DCSs) with many thousands of field connections. Control systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions that are used to control a process through the final control elements (FCEs), such as control valves.
Larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or DCSs, and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.
== Discrete controllers ==
The simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel mounted which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these would be pneumatic controllers, a few of which are still in use, but nearly all are now electronic.
Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increase for a system design there is a point where the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective.
== Distributed control systems ==
A distributed control system (DCS) is a digital process control system (PCS) for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, DCS becomes more cost effective than discrete controllers. Additionally, a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralized control rooms and local on-plant monitoring and control.
A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders and allows the control equipment to be networked and thereby located locally to the equipment being controlled to reduce cabling.
A DCS typically uses custom-designed processors as controllers and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system.
The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves.
The field inputs and outputs can either be continuously changing analog signals e.g. current loop or 2 state signals that switch either on or off, such as relay contacts or a semiconductor switch.
Distributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals.
== SCADA systems ==
Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery.
The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, but using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances. This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks.
The SCADA software operates on a supervisory level as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded.
== Programmable logic controllers ==
PLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
== History ==
Process control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and consolidated overview of the process.
However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of distributed control was realised.
The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. For large control systems, the general commercial name distributed control system (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high-speed networking and a full suite of displays and control racks.
While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. It was soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.
SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control centre. Most RTU systems always had some capacity to handle local control while the master station is not available. However, over the years RTU systems have grown more and more capable of handling local control.
The boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O and are sufficiently reliable that some SCADA systems actually manage closed-loop control over long distances. With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that weren't offered when they were initially developed.
In 1993, with the release of IEC-1131, later to become IEC-61131-3, the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PAC) and industrial PCs (IPC). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs and fit well into multiple form factors such as DIN rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions.
== Security ==
SCADA systems and PLCs are vulnerable to cyber attack. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems. MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems that affect the physical environment, such as power, water and wastewater, and safety controls, from cyber attack. The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity of critical infrastructure control systems.
== See also ==
Automation
Plant process and emergency shutdown systems
MTConnect
OPC Foundation
Safety instrumented system (SIS)
Control system security
Operational Technology
== References ==
== Further reading ==
Guide to Industrial Control Systems (ICS) Security, SP800-82 Rev2, National Institute of Standards and Technology, May 2015.
Walker, Mark John (2012-09-08). The Programmable Logic Controller: its prehistory, emergence and application (PDF) (PhD thesis). Department of Communication and Systems Faculty of Mathematics, Computing and Technology: The Open University. Archived (PDF) from the original on 2018-06-20. Retrieved 2018-06-20.
== External links ==
"New Age of Industrial Controllers". Archived from the original on 2016-03-03.
Proview, an open source process control system
"10 Reasons to choose PC Based Control". Manufacturing Automation. February 2015.
A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle; where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
improved rectification of random fluctuations
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
== Open-loop and closed-loop ==
== Closed-loop transfer function ==
The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.
This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).
If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., elements of their transfer function C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations:
{\displaystyle Y(s)=P(s)U(s)}
{\displaystyle U(s)=C(s)E(s)}
{\displaystyle E(s)=R(s)-F(s)Y(s).}
Solving for Y(s) in terms of R(s) gives
{\displaystyle Y(s)=\left({\frac {P(s)C(s)}{1+P(s)C(s)F(s)}}\right)R(s)=H(s)R(s).}
The expression
{\displaystyle H(s)={\frac {P(s)C(s)}{1+F(s)P(s)C(s)}}}
is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If
{\displaystyle |P(s)C(s)|\gg 1}
, i.e., it has a large norm for each value of s, and if
{\displaystyle |F(s)|\approx 1}
, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input.
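The closed-loop relation above can be checked numerically at a single frequency. The plant, controller, and sensor chosen below are illustrative examples, not from the text.

```python
# Numeric sanity check of the closed-loop transfer function
# H(s) = P(s)C(s) / (1 + P(s)C(s)F(s)), using illustrative example systems.
def P(s): return 1.0 / (s + 1.0)   # example first-order plant
def C(s): return 5.0               # proportional controller
def F(s): return 1.0               # unity-gain sensor

def closed_loop(s):
    loop = P(s) * C(s)             # forward (open-loop) gain
    return loop / (1.0 + loop * F(s))

# At DC the loop gain is only 5, so the output reaches just 5/6 of the
# reference -- a finite loop gain gives imperfect tracking, consistent with
# the |P(s)C(s)| >> 1 condition above.
print(abs(closed_loop(0.0)))  # 0.8333...
```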
== PID feedback control ==
A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism control technique widely used in control systems.
A PID controller continuously calculates an error value e(t) as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal.
The theoretical understanding and application dates from the 1920s, and they are implemented in nearly all analogue control systems; originally in mechanical controllers, and then using discrete electronics and later in industrial process computers.
The PID controller is probably the most-used feedback control design.
If u(t) is the control signal sent to the system, y(t) is the measured output and r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form
{\displaystyle u(t)=K_{P}e(t)+K_{I}\int ^{t}e(\tau ){\text{d}}\tau +K_{D}{\frac {{\text{d}}e(t)}{{\text{d}}t}}.}
The desired closed loop dynamics is obtained by adjusting the three parameters KP, KI and KD, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems: however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.
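A minimal discrete-time approximation of the PID law above can be written with rectangular integration and a backward difference; the gains and timestep are illustrative assumptions.

```python
# Discrete-time sketch of u(t) = K_P e(t) + K_I * integral(e) + K_D * de/dt,
# using rectangular integration and a backward-difference derivative.
# Gains and sample time are illustrative.
def pid_step(e, state, Kp=1.0, Ki=0.5, Kd=0.2, dt=0.01):
    """One controller update; state carries (integral, previous error)."""
    integral, prev_e = state
    integral += e * dt               # rectangular (Euler) integration
    de = (e - prev_e) / dt           # backward-difference derivative
    u = Kp * e + Ki * integral + Kd * de
    return u, (integral, e)

state = (0.0, 0.0)
u, state = pid_step(1.0, state)      # unit error on the first sample
print(u)  # 21.005: the derivative term dominates on a step change in error
```

The large derivative contribution on the first sample illustrates why, as noted below for the continuous case, a pure differentiator amplifies abrupt changes and noise.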
Applying Laplace transformation results in the transformed PID controller equation
{\displaystyle u(s)=K_{P}\,e(s)+K_{I}\,{\frac {1}{s}}\,e(s)+K_{D}\,s\,e(s)}
{\displaystyle u(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right)e(s)}
with the PID controller transfer function
{\displaystyle C(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right).}
As an example of tuning a PID controller in the closed-loop system H(s), consider a 1st order plant given by
{\displaystyle P(s)={\frac {A}{1+sT_{P}}}}
where A and TP are some constants. The plant output is fed back through
F(s) = 1 / (1 + s TF)
where TF is also a constant. Now if we set KP = K(1 + TD/TI), KD = K TD, and KI = K/TI, we can express the PID controller transfer function in series form as
C(s) = K (1 + 1/(s TI)) (1 + s TD)
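These substitutions can be verified symbolically. A quick check (using sympy; not part of the original derivation) that the parallel and series forms coincide:

```python
import sympy as sp

s, K, TI, TD = sp.symbols('s K T_I T_D', positive=True)

# Parallel-form gains expressed in terms of the series parameters.
KP = K * (1 + TD / TI)
KI = K / TI
KD = K * TD

parallel = KP + KI / s + KD * s                   # C(s) = KP + KI/s + KD*s
series = K * (1 + 1 / (s * TI)) * (1 + s * TD)    # C(s) = K(1 + 1/(s TI))(1 + s TD)

print(sp.simplify(parallel - series))  # 0: the two forms are identical
```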
Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting
K = 1/A, TI = TF, TD = TP
H(s) = 1. With this tuning in this example, the system output follows the reference input exactly.
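The claim H(s) = 1 can also be checked symbolically. This sketch assumes the standard closed-loop expression H(s) = C(s)P(s) / (1 + C(s)P(s)F(s)) for the loop described above:

```python
import sympy as sp

s, A, TP, TF = sp.symbols('s A T_P T_F', positive=True)

P = A / (1 + s * TP)                        # first-order plant
F = 1 / (1 + s * TF)                        # feedback filter
K, TI, TD = 1 / A, TF, TP                   # the tuning from the example
C = K * (1 + 1 / (s * TI)) * (1 + s * TD)   # series-form PID controller

# Closed-loop transfer function from reference to output.
H = sp.simplify(C * P / (1 + C * P * F))
print(H)  # 1
```

The cancellation happens because TD = TP makes the controller zero cancel the plant pole, and TI = TF does the same for the feedback filter.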
However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator approach or a differentiator with low-pass roll-off is used instead.
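A differentiator with low-pass roll-off can be sketched as follows, assuming a first-order filter on the derivative term, C_D(s) = KD s / (1 + s Tf), discretized with backward Euler (the gains and noise level are illustrative, not from the text):

```python
import random

def filtered_derivative_step(ud_prev, e, e_prev, kd, tf, dt):
    """One backward-Euler step of the filtered D-term
    u_d + Tf * du_d/dt = Kd * de/dt, i.e. C_D(s) = Kd*s/(1 + s*Tf).
    Tf > 0 rolls the differentiator off at high frequency,
    limiting amplification of measurement noise."""
    return (tf * ud_prev + kd * (e - e_prev)) / (tf + dt)

# A noisy step in the error: the filtered derivative stays bounded,
# whereas a pure difference quotient (e - e_prev)/dt would spike to Kd/dt.
random.seed(0)
kd, tf, dt = 0.5, 0.05, 0.001
ud, e_prev, peak = 0.0, 0.0, 0.0
for _ in range(200):
    e = 1.0 + 0.01 * random.uniform(-1, 1)   # step plus sensor noise
    ud = filtered_derivative_step(ud, e, e_prev, kd, tf, dt)
    peak = max(peak, abs(ud))
    e_prev = e
print(peak < kd / dt)  # True: well below the unfiltered spike Kd/dt
```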
== References == | Wikipedia/Feedback_controller |
Control Engineering (CtE) (ISSN 0010-8049) is a trade publication and web site owned by CFE Media serving the global control, instrumentation, and automation marketplace.
Established in 1954 by Technical Publishing Company, a division of Dun-Donnelley Publishing Corporation, a Dun & Bradstreet Corp. company, Control Engineering is published monthly. Common topics, presented through news, product listings, feature articles, case studies and opinion, include controllers (PLCs & PACs), motors and drives, safety (machine and process), system integration (software, hardware, power supplies, components), control software (including HMI, SCADA and MES), process control, discrete control, industrial networks (fieldbus, Ethernet and wireless), sensors, robotics, I/O, and sustainable/green engineering.
Control Engineering published six other editions.
As of June 2008, total BPA audited circulation was 87,000 subscribers.
Cahners Publishing, a predecessor of Reed Business Information, acquired Technical Publishing in 1986. In April 2010, former owner Reed Business Information announced the magazine's closure; later that month, Control Engineering, Consulting-Specifying Engineer and Plant Engineering were acquired by a new company, CFE Media.
Control Engineering Magazine is available in paper, digital, and online versions. The monthly online version contains several blogs that focus on a variety of topics in the world of automation and control.
One of the blogs is written by former DARPA Grand Challenge team leader Paul Grayson. He writes about an eclectic assortment of people, parts, and products of interest to those following the progress of autonomous vehicles. Grayson is also a strong supporter of efforts to improve STEM education in the USA and has started a 4-H technology club in his neighbourhood. His adventures working with the next generation of engineers and scientists provide an interesting perspective, and sometimes surprises, which he reports in the blog.
== References ==
Verified Audit | Wikipedia/Control_Engineering_(magazine) |
Robotic control is the system that contributes to the movement of robots. This involves the mechanical aspects and programmable systems that make it possible to control robots. Robots can be controlled by various means, including manual, wireless, semi-autonomous (a mix of fully automatic and wireless control), and fully autonomous (using artificial intelligence).
== Modern robots (2000-present) ==
=== Medical and surgical ===
In the medical field, robots are used to make precise movements that are difficult for humans. Robotic surgery involves the use of less-invasive surgical methods, which are “procedures performed through tiny incisions”. An example is the da Vinci surgical system, which consists of a robotic arm (holding surgical instruments) and a camera. The surgeon sits at a console and controls the robot remotely. The feed from the camera is projected on a monitor, allowing the surgeon to see the incisions. The system is built to mimic the movement of the surgeon’s hands and can filter out slight hand tremors. But despite the visual feedback, there is no physical feedback: as the surgeon applies force at the console, he or she cannot feel how much pressure is being applied to the tissue.
=== Military ===
The earliest robots used in the military date back to the 19th century, when automatic weapons were on the rise due to developments in mass production. The first automated weapons, including radio-controlled unmanned aerial vehicles (UAVs), were used in World War I. Since then the technology of ground and aerial robotic weapons has continued to develop and has become part of modern warfare. In this transition phase the robots were semi-automatic, able to be controlled remotely by a human operator. Advancements in sensors and processors led to advancements in the capabilities of military robots. The technology of artificial intelligence (A.I.) began to develop in the mid-20th century; in the 21st century it has been applied to warfare, and weapons that were semi-autonomous are developing into lethal autonomous weapons systems, LAWS for short.
==== Impact ====
As weapons are developed to become fully autonomous, the line that separates an enemy combatant from a civilian becomes ambiguous. There is currently a debate over whether artificial intelligence is able to differentiate between the two, and over what is morally and humanely right (for example, in the case of a child unknowingly working for the enemy).
=== Space exploration ===
Space missions involve sending robots into space with the goal of discovering more of the unknown. The robots used in space exploration have been controlled semi-autonomously: they can maneuver themselves and are self-sustaining, but to allow for data collection and controlled research, they remain in communication with scientists and engineers on Earth. For the National Aeronautics and Space Administration’s (NASA) Curiosity rover, part of its Mars exploration program, communication between the rover and the operators is made possible by “an international network of antennas that…permits constant observation of spacecraft as the Earth rotates on its own axis”.
=== Artificial intelligence ===
Artificial intelligence (AI) is used in robotic control to enable a robot to process and adapt to its surroundings. A robot can be programmed to perform a certain task, for instance, walking up a hill. The technology is relatively new and is being experimented with in several fields, such as the military.
==== Boston Dynamics' robots ====
Boston Dynamics’ “Spot” is an autonomous robot that uses four sensors to map where it is relative to its surroundings. The navigational method is called simultaneous localization and mapping, or “SLAM” for short. Spot has several operating modes and, depending on the obstacles in front of it, can override manual control and perform actions autonomously. This is similar to other robots made by Boston Dynamics, such as “Atlas”, which has similar methods of control. When Atlas is being controlled, the control software doesn’t explicitly tell the robot how to move its joints, but rather “employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment”. Instead of inputting data into every single joint of the robot, the engineers programmed the robot as a whole, which makes it more capable of adapting to its environment.
== See also ==
Synthetic Neural Modeling
Control theory
Cybernetics
Remote-control vehicle
Mobile robot navigation
Robot kinematics
Simultaneous localization and mapping
Robot locomotion
Motion planning
Robot learning
Vision Based Robot Control
== References == | Wikipedia/Robotic_control |
Botany, also called plant science, is the branch of natural science and biology studying plants, especially their anatomy, taxonomy, and ecology. A botanist or plant scientist is a scientist who specialises in this field. "Plant" and "botany" may be defined more narrowly to include only land plants and their study, which is also known as phytology. Phytologists or botanists (in the strict sense) study approximately 410,000 species of land plants, including some 391,000 species of vascular plants (of which approximately 369,000 are flowering plants) and approximately 20,000 bryophytes.
Botany originated as prehistoric herbalism to identify and later cultivate plants that were edible, poisonous, and medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species.
In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately.
Modern botany is a broad subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st-century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity.
== Etymology ==
The term "botany" comes from the Ancient Greek word botanē (βοτάνη) meaning "pasture", "herbs", "grass", or "fodder"; botanē is in turn derived from boskein (Greek: βόσκειν), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress.
== History ==
=== Early botany ===
Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Avestan writings, and in works from China purportedly from before 221 BCE.
Modern botany traces its roots back to Ancient Greece specifically to Theophrastus (c. 371–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the "Father of Botany". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later.
Another work from Ancient Greece that made an early impact on botany is De materia medica, a five-volume encyclopedia about preliminary herbal medicine written in the middle of the first century by Greek physician and pharmacologist Pedanius Dioscorides. De materia medica was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) the Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati, and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner.
In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden in 1545 is usually considered to be the first which is still in its original location. These gardens continued the practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621.
German physician Leonhart Fuchs (1501–1566) was one of "the three German fathers of botany", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification.
Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal, Historia Plantarum, in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium, in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545 – c. 1611) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, the polymath Robert Hooke discovered cells (a term he coined) in cork, and a short time later in living plant tissue.
=== Early modern botany ===
During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts, mosses, liverworts, ferns, algae and fungi.
Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected his ideas of the progression of morphological complexity and the later Bentham & Hooker system, which was influential until the mid-19th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity.
In the 19th century botany was a socially acceptable hobby for upper-class women. These women would collect and paint flowers and plants from around the world with scientific accuracy. The paintings were used to record many species that could not be transported or maintained in other environments. Marianne North illustrated over 900 species in extreme detail with watercolor and oil paintings. Her work and many other women's botany work was the beginning of popularizing botany to a wider audience.
Botany was greatly stimulated by the appearance of the first "modern" textbook, Matthias Schleiden's Grundzüge der Wissenschaftlichen Botanik, published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled the calculation of the rates of molecular diffusion in biological systems.
=== Late modern botany ===
Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century.
The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants.
Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides.
20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield.
Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research.
== Branches of botany ==
Botany is divided along several axes.
Some subfields of botany relate to particular groups of organisms. Divisions related to the broader historical sense of botany include bacteriology, mycology (or fungology), and phycology – respectively, the study of bacteria, fungi, and algae – with lichenology as a subfield of mycology. The narrower sense of botany as the study of embryophytes (land plants) is called phytology. Bryology is the study of mosses (and in the broader sense also liverworts and hornworts). Pteridology (or filicology) is the study of ferns and allied plants. A number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology (or graminology) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles.
Study can also be divided by guild rather than clade or grade. For example, dendrology is the study of woody plants.
Many divisions of biology have botanical subfields. These are commonly denoted by prefixing the word plant (e.g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics), or prefixing or substituting the prefix phyto- (e.g. phytochemistry, phytogeography). The study of fossil plants is called palaeobotany. Other fields are denoted by adding or substituting the word botany (e.g. systematic botany).
Phytosociology is a subfield of plant ecology that classifies and studies communities of plants.
The intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses.
Different parts of plants also give rise to their own subfields, including xylology, carpology (or fructology), and palynology, these being the study of wood, fruit and pollen/spores respectively.
Botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology.
== Scope and importance ==
The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil.
Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life.
The strictest definition of "plant" includes only the "land plants" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses.
Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years.
Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability.
=== Human nutrition ===
Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics.
Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant–people relationships arose among the indigenous peoples of Canada, who learned to distinguish edible plants from inedible ones; these relationships were recorded by ethnobotanists.
== Plant biochemistry ==
Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds.
Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product.
The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that is used to make molecules of ATP and NADPH which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant.
Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. The fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out.
Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed.
Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. Lignin is also used in other cell types, like the sclerenchyma fibres that provide structural support for a plant, and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants, and is responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period.
The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like Crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants.
=== Medicine and materials ===
Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins, such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals, as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (the active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants as ways of treating illness or disease for thousands of years. This knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies in drug discovery.
Plants can synthesise coloured dyes and pigments, such as the anthocyanins responsible for the red colour of red wine; yellow weld and blue woad, used together to produce Lincoln green; indoxyl, source of the blue dye indigo traditionally used to dye denim; and the artist's pigments gamboge and rose madder.
Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent, and as an artist's material, and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. Sweetgrass was used by Native Americans to ward off biting insects like mosquitoes. These insect-repelling properties of sweetgrass were later traced by the American Chemical Society to the molecules phytol and coumarin.
== Plant ecology ==
Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. This information can reveal a great deal about how the land was thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change.
Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest.
Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds.
=== Plants, climate and environmental change ===
Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction.
== Genetics ==
Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms.
Species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes.
Charles Darwin, in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom, noted at the start of chapter XII: "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." An important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. This beneficial effect is also known as hybrid vigour or heterosis. Once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous, since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression.
Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent.
Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. Such plants, reproductively isolated from the parent species but living within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomixis.
As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants.
=== Molecular genetics ===
A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally.
Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in C4 plants. The single celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology.
Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.
=== Epigenetics ===
Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells.
Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others.
Unlike animal cells, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem, which are dead at maturity, and the phloem sieve tubes, which lack nuclei. While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate.
Epigenetic changes can lead to paramutations, which do not follow the Mendelian rules of inheritance. These epigenetic marks are carried from one generation to the next, with one allele inducing a change in the other.
== Plant evolution ==
The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident.
The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina.
Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms.
== Plant physiology ==
Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria, gather energy directly from sunlight by photosynthesis. Heterotrophs, including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria, take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis.
Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, active transport and mass flow are all different ways transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem, and plant hormones are transported by a variety of processes.
=== Plant hormones ===
Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids.
The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle ... acts like the brain of one of the lower animals ... directing the several movements". About the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification.
Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem, where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in the regulation of plant height by controlling stem elongation, and in the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon, which is rapidly metabolised to produce ethylene, are used on an industrial scale to promote ripening of cotton, pineapples and other climacteric crops.
Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum, which regulate wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack.
In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light.
== Plant anatomy and morphology ==
Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form.
All plants are multicellular eukaryotes, their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division.
The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts.
The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant.
In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots.
Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means.
Although reference to major morphological categories such as root, stem, leaf, and trichome is useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, as combinations of processes.
== Systematic botany ==
Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress.
Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined when italics are not available).
The evolutionary relationships and heredity of a group of organisms is called its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two genera are indeed related.
Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent.
From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having "direct access to the genetic basis of evolution." As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants.
In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and speed at which data can be analysed.
== Symbols ==
A few symbols are in current use in botany. A number of others are obsolete; for example, Linnaeus used planetary symbols ⟨♂⟩ (Mars) for biennial plants, ⟨♃⟩ (Jupiter) for herbaceous perennials and ⟨♄⟩ (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willdenow used ⟨♄⟩ (Saturn) for neuter in addition to ⟨☿⟩ (Mercury) for hermaphroditic. The following symbols are still used:
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
Media related to Botany at Wikimedia Commons | Wikipedia/Plant_Science |
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry, and the animals studied were livestock species such as cattle, sheep, pigs, poultry, and horses. Today, courses cover a broader range of animals, including companion animals such as dogs and cats, as well as many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which often have on-campus farms to give students hands-on experience with livestock animals.
== Education ==
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
=== Bachelor degree ===
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
==== Pre-veterinary emphasis ====
Many schools that offer a degree option in Animal Science, such as Iowa State University, the University of Nebraska–Lincoln, and the University of Minnesota, also offer a pre-veterinary emphasis. This option provides knowledge of the biological and physical sciences, including nutrition, reproduction, physiology, and genetics. This can prepare students for graduate studies in animal science, veterinary school, and careers in the pharmaceutical or animal science industries.
=== Graduate studies ===
In a Master of Science degree option, students take required courses in areas that support their main interest. These courses go beyond those normally required for a Bachelor of Science degree in the Animal Science major. In a Ph.D. degree program, students take courses related to their major that are more in-depth than those for the Master of Science degree, with an emphasis on research or teaching.
Graduate studies in animal sciences are considered preparation for upper-level positions in production, management, education, research, or agri-services. Professional studies in veterinary medicine, law, and business administration are among the programs most commonly chosen by graduates. Other areas of study include growth biology, physiology, nutrition, and production systems.
== Careers in Animal Science ==
There are a variety of careers available to someone with an animal science degree, including, but not limited to, academic researcher, animal nutritionist, animal physiotherapy technician, nature conservation officer, zookeeper, and zoologist.
== Areas of study ==
=== Animal Behavior ===
Animal behavior is the study of how animals interact with their environment and with each other socially, and how they may achieve understanding of their environment. Animal behavior is examined within the framework of its development, mechanism, adaptive value, and evolution.
=== Animal Genetics ===
Animal genetics is the study of an animal's genes and how they affect the animal's appearance, health, and function. The information gained from such studies is often applied to livestock breeding.
=== Veterinary Medicine ===
Veterinary medicine is a specialization within the field of medicine focusing on the diagnosis, prevention, control, and treatment of diseases that affect both wild and domesticated animals. There are three main medical positions within veterinary medicine: veterinarians, veterinary technicians, and veterinary assistants.
== See also ==
American Registry of Professional Animal Scientists
List of animal science degree-granting institutions
Zoology, the study of all animals.
Veterinary science
== References ==
== External links ==
"Career Information." American Society of Animal Science. ASAS, 2009. Web. 29 September 2011.
http://www.asas.org American Society of Animal Science
"UNL Animal Science Department." University of Nebraska-Lincoln. UNL Institute of Agriculture and Natural Resources, 27 January 2015.
"MSU Department of Animal Science." Michigan State University. Michigan State University Department of Animal Science, 28 December 2013.
"Animal Industry Careers." Purdue University. Purdue University, 11 August 2005. Web. 5 October 2011.
http://www.ansc.purdue.edu Purdue University Animal Science | Wikipedia/Animal_Science |
Aydın Adnan Menderes University (Turkish: Aydın Adnan Menderes Üniversitesi) is a state university founded in Aydın, Turkey, in 1992. The university is named after Adnan Menderes, a former Prime Minister of Turkey.
The university began its education in Efeler and Nazilli in 1992 with five faculties (Science-Literature, Nazilli School of Economics and Administrative Sciences, Medicine, Veterinary, and Agriculture faculties), three institutes, the School of Tourism and Hotel Management, and the School of Vocational Studies in Söke. Currently, the university has 21 faculties, 3 institutes, a State Conservatory, 19 Vocational Schools, and 36 Application and Research Centers.
Today, it continues its activities across 17 different locations including Efeler, Çakmar, Işıklı, İsabeyli, Didim, Nazilli, Çine, Karacasu, Kuşadası, Atça, Sultanhisar, Yenipazar, Söke, Davutlar, Bozdoğan, Köşk, and Buharkent.
In the 2023-2024 academic year, a total of 48,414 students were enrolled, including 18,717 associate degree, 26,100 undergraduate, 2,930 master's, and 667 doctoral students.
As of 2024, the university employed a total of 1,829 academic staff, including 455 professors, 250 associate professors, 376 assistant professors, 394 lecturers, and 354 research assistants.
== Units ==
=== Faculties ===
Tourism Faculty
Medical School
Faculty of Education
Faculty of Arts and Sciences
Nazilli Faculty of Economics and Administrative Sciences
Faculty of Veterinary Medicine
Faculty of Agriculture
Faculty of Communication
Faculty of Engineering
Aydın Faculty of Economics
Faculty of Dentistry
Söke Business Administration Faculty
Söke Architecture Faculty
Kuşadası Maritime Faculty
Faculty of Nursing
Faculty of Health Sciences
=== Institutes ===
Institute of Science and Technology
Health Sciences Institute
Institute of Social Sciences
=== Colleges ===
School of Physical Education and Sports
Nazilli School of Applied Science
Söke Health School
School of Foreign Languages
Nazilli State Conservatory
=== Vocational Schools ===
Atça Vocational School
Aydın Vocational School
Aydin Health Services Vocational School
Bozdoğan Vocational School
Çine Vocational School
Didim Vocational School
Karacasu Vocational School
Kuyucak Vocational School
Cooperative Vocational School
Nazilli Vocational School
Nazilli Health Services Vocational School
Söke Vocational School
Söke Health Services Vocational School
Sultanhisar Vocational School
Yenipazar Vocational School
Buharkent Vocational School
== See also ==
Adnan Menderes
List of universities in Turkey
== References ==
== External links ==
Aydın Adnan Menderes University - Official website | Wikipedia/Aydın_Adnan_Menderes_University |
The State University of New York College of Environmental Science and Forestry (ESF) is a public research university in Syracuse, New York, focused on the environment and natural resources. It is part of the State University of New York (SUNY) system. ESF is immediately adjacent to Syracuse University, within which it was founded, and with which it maintains a special relationship. It is classified among "R2: Doctoral Universities – High research activity".
ESF operates education and research facilities also in the Adirondack Park (including the Ranger School in Wanakena), the Thousand Islands, elsewhere in Central New York, and Costa Rica. The college's curricula focus on the understanding, management, and sustainability of the environment and natural resources.
== History ==
=== Founding ===
The New York State College of Forestry at Syracuse University was established on July 28, 1911, through a bill signed by New York Governor John Alden Dix. The previous year, Governor Charles Hughes had vetoed a bill authorizing such a college. Both bills followed the state's defunding in 1903 of the New York State College of Forestry at Cornell. Originally a unit of Syracuse University, in 1913, the college was made a separate, legal entity.
Syracuse native and constitutional lawyer Louis Marshall, with a summer residence at Knollwood Club on Saranac Lake and a prime mover for the establishment of the Adirondack and Catskill Forest Preserve (New York), became a Syracuse University Trustee in 1910. He confided in Chancellor James R. Day his desire to have an agricultural and forestry school at the university, and by 1911 his efforts resulted in a New York State bill to fund the project: the aforementioned appropriation bill signed by Governor Dix. Marshall was elected president of the college's board of trustees at its first meeting, in 1911; at the time of his death, eighteen years later, he was still president of the board.
The first dean of the college was William L. Bray, a Ph.D. graduate of the University of Chicago, botanist, plant ecologist, biogeographer and Professor of Botany at Syracuse University. In 1907 he was made head of the botany department at Syracuse, and in 1908 he started teaching a forestry course in the basement of Lyman Hall. Bray was an associate of Gifford Pinchot, who was the first Chief of the United States Forest Service. In 1911, in addition to assuming the deanship of forestry, Bray organized the Agricultural Division at Syracuse University. He remained at Syracuse until 1943 as chair of botany and Dean of the Syracuse Graduate School.
In 1915, the same year that Dr. Bray published The Development of the Vegetation of New York State, he became one of the founding members, along with Raphael Zon and Yale School of Forestry's second dean, James W. Toumey, of the Ecological Society of America. In 1950, the 1917 "activist wing" of that Society formed today's The Nature Conservancy.
Most of the professors in the early years of the College of Forestry at Syracuse and the Department of Forestry at Cornell's New York State College of Agriculture were educated in forestry at the Yale School of Forestry. The forestry students at Syracuse, though not those at Cornell, were referred to as "stumpies" by their classmates.
Fifty-two students were enrolled in the school's first year, the first 11 graduating two years later, in 1913. Research at the college commenced in 1912, with a study of New York state firms using lumber, including from which tree species and in what quantities.
=== Expansion ===
In 1912, the college opened its Ranger School in Wanakena, New York, in the Adirondacks. The college began enrolling women as early as 1915, but the first women to complete their degrees—one majoring in landscape engineering and two in pulp and paper—graduated in the late 1940s. The Ranger School did not enroll any women until 1973–74.
In January 1930, Governor Franklin D. Roosevelt, recommending an allocation of $600,000 towards construction of the college's second building, in honor of the recently deceased Louis Marshall, noted that: "under [Marshall's] leadership and the leadership of its late dean, Franklin Moon, the School of Forestry made giant strides until it became recognized as the premier institution of its kind in the United States". The cornerstone of Louis Marshall Memorial Hall was laid in 1931 by former Governor and presidential candidate Alfred E. Smith, who was elected to assume the presidency of the college's board of trustees.
=== Affiliation with SUNY ===
With the formation of the State University of New York (SUNY) in 1948, the college became recognized as a specialized college within the SUNY system, and its name was changed to State University College of Forestry at Syracuse University. In 1972, the college's name was changed yet again to State University of New York College of Environmental Science and Forestry. Unlike other state-supported degree-granting institutions which had been created at private institutions in New York State, the New York State College of Forestry at Syracuse University was an autonomous institution not administratively part of Syracuse University. In 2000, SUNY System Administration established ESF's "primacy" among the 64 SUNY campuses and contract colleges for development of new undergraduate degree programs in Environmental Science and Environmental Studies.
== Campuses ==
=== Syracuse ===
ESF's main campus, in Syracuse, New York, is where most academic, administrative, and student activity takes place. The campus is made up of nine main buildings:
Baker Laboratory: Named after Hugh P. Baker, dean of the college from 1912 to 1920 and again 1930–33. The building is the location of several computer clusters and auditorium-style classrooms. It is home to the Department of Environmental Resources Engineering and the Division of Environmental Science. The building underwent a $37 million overhaul in the early 2000s, providing updated space for the Tropical Timber Information Center and the Nelson C. Brown Center for Ultrastructure Studies. Baker Lab is the site of ESF's NASA-affiliated Research Center. Baker Laboratory houses two multimedia lecture halls, a "smart" classroom outfitted for computer use and distance learning, and two construction management and planning studios. It also has a full-scale laboratory for materials science testing, including a modern dry kiln, a wood identification laboratory, shop facilities (including portable sawmill) and wood preservation laboratory.
Bray Hall: Completed in 1917, the building is the oldest on campus and was the largest devoted to forestry at the time. It is named after William L. Bray, a founder of the New York State College of Forestry at Syracuse University and its first dean, 1911–1912. It is the location of most administrative offices and the Department of Sustainable Resources Management. The State University Police department is in the basement.
Gateway Center: The campus' newest building, opened in March 2013, "sets a new standard for LEED buildings, producing more renewable energy than it consumes," according to Cornelius B. Murphy, Jr. The building is "designed to achieve LEED Platinum Certification". The ESF College Bookstore, Trailhead Cafe, and Office of Admissions are in the Gateway Center.
Illick Hall: The building was completed in 1968, and is home to the Department of Environmental and Forest Biology. It is named after Joseph S. Illick, a dean of the State University College of Forestry at Syracuse University. There is a large lecture hall (Illick 5) on the ground floor. Several greenhouses are on the fifth floor. The Roosevelt Wildlife Museum is also in the building.
Jahn Laboratory: Named after Edwin C. Jahn, former head of the New York State College of Forestry at Syracuse University. The building was completed in 1997. Home to the Department of Chemistry.
Marshall Hall: Named after Louis Marshall, one of the founders of the New York State College of Forestry at Syracuse University. The Alumni (Nifkin) Lounge and Marshall Auditorium are within. Twin brass plaques commemorate the contributions of Marshall and his son, alumnus Bob Marshall. Home of the Department of Environmental Studies, the Department of Landscape Architecture, and the Division of General Education.
Moon Library: Dedicated to F. Franklin Moon, an early dean of the college. Completed in 1968, along with Illick Hall. A computer cluster and student lounge are in the basement.
Walters Hall: Named after J. Henry Walters, who served on the college's board of trustees. Completed in 1969. Home to the Department of Chemical Engineering. The pilot plant in the building includes two paper machines and wood-to-ethanol processing equipment.
Centennial Hall: ESF's on-campus student dormitory, commemorating the college's 100th anniversary. The facility can accommodate 280–300 freshmen (in double or triple studio rooms with private bath), 116 upperclassmen (in single-bedroom suites with private bath), and an additional 56 upperclassmen (in 4-bedroom, 2-bath apartments). A $31 million project, Centennial Hall opened in 2011.
Bray Hall, Marshall Hall, Illick Hall, and Moon Library border the quad. Other buildings on the Syracuse campus include one for maintenance and operations, a garage, and a greenhouse converted to office space. Among planned new buildings is a research support facility.
The historic Robin Hood Oak is behind Bray Hall. The tree is said to have grown from an acorn brought back by a faculty member from Sherwood Forest in England. It was the first tree listed on the National Register of Historic Trees in the United States.
=== Wanakena ===
Students in the forest and natural resources management curriculum may spend an academic year (48 credits) or summer at the Ranger School, in Wanakena, New York, earning an Associate of Applied Science (A.A.S.) degree in forest technology, surveying, or environmental and natural resources conservation. The campus, established in 1912, is on the east branch of the Oswegatchie River that flows into Cranberry Lake, in the northwestern part of the Adirondack Park. It includes the 3,000-acre (12 km2) James F. Dubuar Memorial Forest, named after a former director of the Ranger School.
=== Field stations and forests ===
New York
Cranberry Lake: The college's environmental and forest biology summer field program is at the Cranberry Lake Biological Station, on Cranberry Lake in the Adirondack Park.
Newcomb: The Adirondack Ecological Center and Huntington Wildlife Forest, a 15,000-acre (6,000-hectare) field station in the central Adirondack Mountains, are near Newcomb, New York. The site includes the Arbutus Great Camp, bunkhouses, and a dining center, among other facilities.
Syracuse: The Lafayette Road Experiment Station is in the City of Syracuse.
Thousand Islands: The Thousand Islands Biological Station and Ellis International Laboratory are in the Thousand Islands, New York.
Tully: ESF's Tully Field Station and the Svend O. Heiberg Memorial Forest, a 3,800-acre (1,500-hectare) research forest, are in Tully, New York.
Warrensburg: The Charles Lathrop Pack Demonstration Forest and NYS Department of Environmental Conservation's Environmental Education Camp are near Warrensburg, New York.
Follensby: Follensby Park is a 14,600-acre property near Tupper Lake where Ralph Waldo Emerson held his historic Philosophers' Camp. The announcement was made during a virtual press conference on Tuesday, February 13, 2024.
Costa Rica
The Arturo and Maria Sundt Field Station, ESF's first international field station, is used for research and teaching. A former farm, it is near the town of Coyolito, in the province of Guanacaste, Costa Rica, approximately one mile (1.6 km) from the Gulf of Nicoya on the country's west coast.
== Academics ==
The ESF mission statement is "to advance knowledge and skills and to promote the leadership necessary for the stewardship of both the natural and designed environments." ESF is a "specialized institution" of the State University of New York, meaning that curricula focus primarily on one field, the college's being environmental management and stewardship. Students may supplement their education with courses taken at Syracuse University. ESF has academic departments in the fields of chemistry; environmental and forest biology; environmental resources engineering; environmental studies; sustainable resources management; landscape architecture; and chemical engineering. Environmental science programs offer students integrative degrees across the natural sciences.
The admission rate for applicants to ESF is 83 percent (Fall 2023). ESF is ranked at 74th in the 2025 US News & World Report rankings of the top public national universities. Furthermore, ESF is tied at 144th in the 2025 US News & World Report list of the best National Universities (both public and private). U.S. News & World Report ranked ESF as the 64th best graduate school in the Environmental/Environmental Health Engineering category in 2016. The Washington Monthly College Guide ranked ESF No. 49 among the nation's top service-oriented colleges and universities for 2012 (and sixth in "community service participation and hours served").
Forbes Magazine ranked ESF #54 in its listing of "America's Best College Buys" for 2012. Forbes.com has also ranked ESF at No. 3 on its 2010 list of the 20 best colleges for women in science, technology, engineering and mathematics (STEM). ESF is listed at No. 2, ahead of top programs like Duke, Cornell and Yale, among the best college environmental programs in the nation by Treehugger.com, a website devoted to sustainability and environmental news. In 2007, DesignIntelligence magazine ranked ESF's undergraduate and graduate programs in "Landscape Architecture", respectively at No. 12 and No. 9 in the United States.
The Online College Database ranked ESF at No. 6 on its list of "50 Colleges Committed to Saving the Planet" for 2013. The ranking relates in part to one of the school's newest programs, Sustainable Energy Management. Launched in 2013, the program focuses on energy markets, management, and resources. Global issues such as responsible energy use and development of sustainable energy sources are critical focal points in the STEM major.
== Research ==
ESF is classified as a "Carnegie R2 Doctoral Universities: High Research Activity" institution. The first research report published by the College of Forestry, in 1913, was the result of the above-noted study of the wood-using industries of New York State, supported by the USDA Forest Service. Since that time, the research initiatives of the State University of New York College of Environmental Science and Forestry (ESF) have expanded greatly as faculty and students conduct pioneering studies, many with a global reach. ESF researchers delve into topics well beyond the boundaries of central New York. Recent international sites of research interest include Madagascar, the Amazon floodplains, Mongolia and the Galapagos Islands. Vermont and the Sierra Nevada are other locales within the US where recent research has focused. Current research efforts include the Willow Biomass Project and the American Chestnut Research and Restoration Project, which produced the Darling 58 chestnut tree.
== Campus life ==
Many students identify themselves as a "Stumpy" (or "Stumpie"). The nickname was given to students by their neighbors at Syracuse University, probably in the 1920s, and most likely refers to forestry "stump jumpers". Although originally used as an insult, today most students embrace the nickname with pride.
Students at the Syracuse campus enjoy many activities on and off campus. There are a number of student clubs and organizations at ESF, including the Mighty Oaks Student Assembly (formerly United Students Association), Graduate Student Association, the Guy A. Baldassare Birding Club, the Student Environmental Education Coalition, the Woodsmen Team, Bob Marshall Club, Alpha Xi Sigma Honor Society, Soccer Team, Sigma Lambda Alpha, The Knothole (weekly newspaper), Papyrus Club, The Empire Forester (yearbook), Landscape Architecture Club (formerly the Mollet Club), Forest Engineers Club, Environmental Studies Student Organization, Habitat for Humanity, Ecologue (yearly journal), the Bioethics Society, Green Campus Initiative, Baobab Society, and the Sustainable Energy Club. Wanakena students have their own woodsmen and ice hockey teams. A number of professional organizations are also open to student membership, including the Society of American Foresters, The Wildlife Society, Conservation Biology club, American Fisheries Association, and the (defunct) American Water Resources Association.
ESF has an agreement with adjacent Syracuse University that allows ESF students to enjoy many amenities offered by SU. ESF students take courses at their sister institution, can apply for admission to concurrent degree and joint certificate programs, and may join any SU organization except for NCAA sports teams. SU students are also welcome to enroll in ESF classes. Because of this, students feel a certain degree of integration with the Syracuse University community. Every May, ESF holds a joint commencement ceremony with Syracuse University in the Carrier Dome. ESF's baccalaureate diplomas bear the seals of the State University of New York and Syracuse University.
Students also enjoy a variety of shops, restaurants, museums, and theaters in Syracuse, and nearby Marshall Street and Westcott Street.
== Gateway Center ==
ESF has launched several programs, on campus and at other locations, to reduce its carbon emissions. The Gateway Center uses sustainable energy resources to generate power and heat for the campus. The building includes a state-of-the-art combined heat-and-power (CHP) system that produces 65% of the campus's heating needs along with 20% of its electrical needs. The CHP system uses biomass to drive a steam turbine and produce electricity, while natural gas provides steam heating along with additional electricity. It has been estimated that this building alone is responsible for reducing ESF's carbon footprint by 22%.
Increased global awareness of global warming and the depletion of nonrenewable resources have driven ESF to invest in biomass. Biomass is a renewable resource that draws light energy, carbon dioxide, and water from the environment; in return, oxygen is released. It can be harvested without negatively affecting the environment. For this reason, ESF launched a program to grow its own biomass, known as the Willow Biomass Project. Benefits of woody willow include high yields and fast growth times, quick re-sprouting, and the high heat energy produced when it is burned. Woody willow also increases habitat diversity and contributes significantly to carbon neutrality.
The Gateway Center was one of the final stages in the school's Climate Action Plan, which encompasses the vision of carbon neutrality and reduced fossil fuel dependence by 2015. Currently, the school is in Phase III of the program and is on track to reach its goal. Phase III includes the opening of the Gateway Center, retrofits to Illick Hall, and rooftop greenhouse replacement. Another advance toward carbon neutrality can be seen atop the campus's buildings: rooftop gardens on Gateway and other buildings reduce energy consumption and water runoff, with shrubbery, soil thickness, and moisture content all contributing to increased energy savings.
== Athletics ==
The SUNY ESF athletic teams are called the Mighty Oaks. The college is a member of the United States Collegiate Athletic Association (USCAA), primarily competing in the Hudson Valley Intercollegiate Athletic Conference (HVIAC) since about the 2004–05 academic year.
ESF competes in 11 intercollegiate varsity sports: Men's sports include basketball, cross country, golf, soccer and track & field; while women's sports include cross country, soccer and track & field; and co-ed sports include bass fishing and timber sports.
=== Cross country ===
The school's men's cross-country team are eight-time USCAA national champions (2011–2014; 2021–2024). The women's cross-country team are five-time USCAA national champions (2018; 2021–2024). In 2024, ESF became the only USCAA school to win 4 consecutive titles for both men's and women's competition in the same sport.
=== Soccer ===
The men's soccer team was invited to the 2012 USCAA National Championship Tournament in Asheville, North Carolina, making it to the semifinals.
=== Timber sports ===
ESF has a long tradition of competing in intercollegiate woodsman competitions in the northeastern US and eastern Canada. The team came in first in both the men's and women's divisions of the northeastern US and Canadian 2012 spring meet. Students at the SUNY-ESF Ranger School, in Wanakena, compete as the Blue Ox Woodsmen team.
=== Club sports ===
In addition to the intercollegiate USCAA and woodsman teams, ESF students participate on club sports teams at both ESF and Syracuse University, including ESF's competitive bass fishing team, and SU's quidditch team. Students at the Ranger School participate in the Ranger School Hockey Club.
=== Athletics history ===
In one notable part of the college's history, Laurie D. Cox, professor of Landscape Architecture, was responsible for establishing Syracuse University's renowned lacrosse program in 1916, including players from the New York State College of Forestry.
== Affiliation with Syracuse University ==
ESF was founded in 1911 as the New York State College of Forestry at Syracuse University, under the leadership of Syracuse University Trustee Louis Marshall, with the active support of Syracuse University Chancellor Day. Its founding followed several years after the cessation of state funding to the earlier New York State College of Forestry at Cornell.
ESF is an autonomous institution, administratively separate from Syracuse University, while some resources, facilities and infrastructure are shared. The two schools share a common Schedule of Classes; students may take courses at both institutions, and baccalaureate diplomas from ESF bear the Syracuse University seal along with that of the State University of New York. A number of concurrent degree programs and certificates are offered between the schools. ESF receives an annual appropriation as part of the SUNY budget and the state builds and maintains all of the college's educational facilities. The state has somewhat similar financial and working relationships with five statutory colleges that are at Alfred University and Cornell University, although unlike ESF, these statutory institutions are legally and technically part of their respective host institutions and are administered by them as well.
ESF faculty, students, and students' families join those from Syracuse University (SU) in a joint convocation ceremony at the beginning of the academic year in August and combined commencement exercises in May. ESF and SU students share access to library resources, recreational facilities, student clubs, and activities at both institutions, except for the schools' intercollegiate athletics teams, affiliated with the USCAA and NCAA, respectively.
== Traditions ==
The best-known tradition among ESF students is that walking across the quad is shunned. The tradition, which dates back to at least the early 1960s, is intended to keep paths from being worn into the lawn. Hecklers have been known to yell at, and even tackle, people walking across the quad. Other activities, such as frisbee and soccer, are nevertheless encouraged on the quad.
Eustace B. Nifkin, ESF's former mascot, is an unofficial student. He first appeared in the 1940s, dreamed up by a group of students summering in the Adirondacks. Ever since, he has appeared on class rosters, written articles for The Knothole, and sent mail to the college from around the world. He has a girlfriend, the lesser-known Elsa S. Freeborn. SUNY granted him a bachelor's degree in 1972, and the Alumni Lounge in Marshall Hall is dedicated to him.
Another well-known legend is that of Chainer (or Chainsaw), who supposedly graduated in 1993.
Traditional events include:
== Notable alumni ==
More than 19,000 students have graduated from ESF since its founding in 1911. The college's Alumni Association was founded 14 years later, in 1925. Notable alumni include:
== Environmental leadership ==
From soon after its founding, ESF affiliated individuals have been responsible for establishing and leading prominent scientific and advocacy organizations around the world focused on the environment. Others have provided leadership to governmental environmental agencies.
== See also ==
Adirondack High Peaks, ESF's origins and inspiration
Adirondack Park Agency visitor interpretive centers
History of the New York State College of Forestry
List of heads of the New York State College of Forestry
François André Michaux, whose work The North American Sylva (begun in 1811, and akin to John James Audubon's The Birds of America) laid the foundation for American forestry
== References ==
== External links ==
Official website
Official athletics website
ESF's regional campuses: drone tour | Wikipedia/State_University_of_New_York_College_of_Environmental_Science_and_Forestry |
The Bachelor of Computer Information Systems (abbreviated BSc CIS), also offered as the Bachelor of Computer & Information Science by the University of Oregon and The Ohio State University, is an undergraduate or bachelor's degree that focuses on practical applications of technology to support organizations while adding value to their offerings. Applying technology effectively in this way requires a broad range of subjects, such as communications, business, networking, software design, and mathematics. Because the degree spans much of the computing field, it prepares graduates for computing positions across many sectors.
Some computer information systems programs have received accreditation from ABET, the recognized U.S. accreditor of college and university programs in applied science, computing, engineering, and technology.
== References == | Wikipedia/Bachelor_of_Computer_Information_Systems |
A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation; or to a group of computers that are linked and function together, such as a computer network or computer cluster.
A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users.
Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries.
Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.
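The fetch–decode–execute cycle described above can be sketched as a toy stored-program machine. The instruction set, opcode names, and memory layout below are invented for illustration and correspond to no real CPU:

```python
# A toy stored-program machine: program and data share one memory,
# and the control logic can change the flow of execution (JNZ) in
# response to stored information.
def run(memory):
    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        op, arg = memory[pc]          # fetch and decode
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "SUB":
            acc -= memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JNZ" and acc != 0:
            pc = arg                  # conditional branch
        elif op == "HALT":
            return memory

# Multiply 3 * 4 by repeated addition.
mem = [
    ("LOAD", 10), ("ADD", 8), ("STORE", 10),   # product += a
    ("LOAD", 9), ("SUB", 11), ("STORE", 9),    # counter -= 1
    ("JNZ", 0),                                # loop while counter != 0
    ("HALT", 0),
    3, 4, 0, 1,    # cells 8-11: a, counter, product, the constant 1
]
result = run(mem)
print(result[10])  # 12
```

The conditional branch (JNZ) is what lets the sequencing logic "change the order of operations in response to stored information": the loop repeats only while the counter cell is nonzero.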
== Etymology ==
The word computer did not acquire its modern definition until the mid-20th century. According to the Oxford English Dictionary, the first known use of the word was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage referred to a human computer, a person who carried out calculations or computations. The word retained this meaning until the middle of the 20th century. During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts; by 1943, most human computers were women.
The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions.
== History ==
=== Pre-20th century ===
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example.
The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity would not reappear until the fourteenth century.
Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.
The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.
The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.
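The slide rule's multiplication rests on the logarithmic identity log(a) + log(b) = log(ab): sliding a length proportional to log(b) along from the mark for a lands on the mark for ab. The identity can be checked numerically (an illustrative sketch, not a model of any particular rule):

```python
import math

# A slide rule multiplies by adding logarithmic lengths:
# log10(a) + log10(b) = log10(a * b), so the sum of the two
# lengths points at the mark for the product.
def slide_rule_multiply(a, b):
    return 10 ** (math.log10(a) + math.log10(b))

print(round(slide_rule_multiply(2, 8), 6))  # 16.0
```

Division works the same way in reverse, by subtracting the length for the divisor.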
In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.
In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences.
=== First computer ===
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century.
After working on his difference engine, which he designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
=== Electromechanical calculating machine ===
In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine.
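For comparison, the class of formula Torres Quevedo's design targeted is trivial to evaluate today; the sample value sets below are hypothetical:

```python
# Evaluate a^x * (y - z)^2 for a sequence of sets of values,
# the kind of computation Torres Quevedo's 1914 design was to perform.
def torres_formula(a, x, y, z):
    return a ** x * (y - z) ** 2

# Hypothetical sample inputs, one result per set of values.
for a, x, y, z in [(2, 3, 7, 5), (3, 2, 10, 4)]:
    print(torres_formula(a, x, y, z))  # 32, then 324
```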
=== Analog computers ===
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson.
The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).
=== Digital computers ===
==== Electromechanical ====
Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers.
By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries.
Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer.
In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete.
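Zuse's choice of binary meant each relay had to hold only one of two states, so a word maps directly onto a row of on/off relays. An illustrative sketch (the 22-bit width matches the Z3's word length, but the code models no real Z3 mechanism):

```python
# A 22-bit word as a row of two-state relays: each relay is
# simply on (1) or off (0), most significant bit first.
def to_relays(n, bits=22):
    return [(n >> i) & 1 for i in reversed(range(bits))]

def from_relays(relays):
    value = 0
    for bit in relays:
        value = value * 2 + bit
    return value

print(to_relays(5, bits=4))   # [0, 1, 0, 1]
print(from_relays(to_relays(42)))  # 42
```

A decimal machine would instead need each digit position to distinguish ten states, which is harder to realize reliably with simple switching elements.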
Zuse's next computer, the Z4, became the world's first commercial computer; after initial delays caused by the Second World War, it was completed in 1950 and delivered to ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which he founded in Berlin in 1941 as the first company whose sole purpose was developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe.
==== Vacuum tubes and digital electronic circuits ====
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.
During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February.
Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). The Colossus Mark I contained 1,500 thermionic valves (tubes); the Mark II, with 2,400 valves, was both five times faster and simpler to operate, greatly speeding the decoding process.
The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls".
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.
=== Modern computers ===
==== Concept of modern computer ====
The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
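The table-driven mechanism of a Turing machine can be sketched in a few lines; the toy transition table below, which increments a binary number, is invented for illustration:

```python
# Minimal Turing machine simulator: a state, a tape, a head position,
# and a transition table (state, symbol) -> (write, move, new state).
def run_tm(table, tape, state="start", halt="halt"):
    tape = dict(enumerate(tape))
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")           # "_" is the blank symbol
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip("_")

# Toy machine: add one to a binary number (head starts at the left end).
table = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry -> 0, carry on
    ("carry", "0"): ("1", "R", "halt"),    # absorb the carry
    ("carry", "_"): ("1", "R", "halt"),    # overflow: new leading 1
}
print(run_tm(table, "1011"))  # 1100
```

A universal machine takes this one step further: the transition table itself is encoded on the tape, so a single fixed machine can simulate any other, which is the stored-program idea in its purest form.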
==== Stored programs ====
Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job.
==== Transistors ====
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell.
The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics.
==== Integrated circuits ====
The next great advance in computing power came with the advent of the integrated circuit (IC).
The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952.
The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce.
Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was the first true monolithic IC chip, and it solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. The planar process, in turn, was based on the work of Carl Frosch and Lincoln Derick on semiconductor surface passivation by silicon dioxide.
Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs.
The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.
Systems on a chip (SoCs) are complete computers on a single microchip the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.
=== Mobile computers ===
The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s.
These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.
== Types ==
Computers can be classified in a number of different ways, including:
=== By architecture ===
Analog computer
Digital computer
Hybrid computer
Harvard architecture
Von Neumann architecture
Complex instruction set computer
Reduced instruction set computer
=== By size, form-factor and purpose ===
Supercomputer
Mainframe computer
Minicomputer (term no longer used), Midrange computer
Server
Rackmount server
Blade server
Tower server
Personal computer
Workstation
Microcomputer (term no longer used)
Home computer (term fallen into disuse)
Desktop computer
Tower desktop
Slimline desktop
Multimedia computer (non-linear editing system computers, video editing PCs and the like; this term is no longer used)
Gaming computer
All-in-one PC
Nettop (Small form factor PCs, Mini PCs)
Home theater PC
Keyboard computer
Portable computer
Thin client
Internet appliance
Laptop computer
Desktop replacement computer
Gaming laptop
Rugged laptop
2-in-1 PC
Ultrabook
Chromebook
Subnotebook
Smartbook
Netbook
Mobile computer
Tablet computer
Smartphone
Ultra-mobile PC
Pocket PC
Palmtop PC
Handheld PC
Pocket computer
Wearable computer
Smartwatch
Smartglasses
Single-board computer
Plug computer
Stick PC
Programmable logic controller
Computer-on-module
System on module
System in a package
System-on-chip (Also known as an Application Processor or AP if it lacks circuitry such as radio circuitry)
Microcontroller
=== Unconventional computers ===
A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer.
== Hardware ==
The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and input devices such as mice are all hardware.
=== History of computing hardware ===
=== Other hardware topics ===
A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
=== Input devices ===
Input devices supply unprocessed data to the computer, which processes it and delivers the results to output devices. Input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:
Computer keyboard
Digital camera
Graphics tablet
Image scanner
Joystick
Microphone
Mouse
Overlay keyboard
Real-time clock
Trackball
Touchscreen
Light pen
=== Output devices ===
The means through which a computer gives output are known as output devices. Some examples of output devices are:
Computer monitor
Printer
PC speaker
Projector
Sound card
Graphics card
=== Control unit ===
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):
Read the code for the next instruction from the cell indicated by the program counter.
Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
Increment the program counter so it points to the next instruction.
Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
Provide the necessary data to an ALU or register.
If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
Write the result from the ALU back to a memory location or to a register or perhaps an output device.
Jump back to step (1).
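The fetch–decode–execute steps above can be sketched as a short Python loop. This is a toy illustration only: the instruction names, the (opcode, operand) tuple encoding, and the memory layout are all invented for the example, whereas a real CPU works with purely numeric codes and hardware registers.

```python
# A toy CPU: one memory holds both instructions and data (von Neumann style).
# Instructions are (opcode, operand) pairs standing in for numeric codes.
def run(memory):
    pc = 0                      # program counter
    acc = 0                     # a single accumulator register
    while True:
        op, arg = memory[pc]    # 1. read the instruction the PC points at
        pc += 1                 # 3. increment the PC to the next instruction
        if op == "LOAD":        # 2./4. decode, then read data from a memory cell
            acc = memory[arg]
        elif op == "ADD":       # 5./6. hand the data to the "ALU"
            acc += memory[arg]
        elif op == "STORE":     # 7. write the result back to a memory cell
            memory[arg] = acc
        elif op == "JUMP":      # a jump simply overwrites the program counter
            pc = arg
        elif op == "HALT":
            return memory

# Program at cells 0-3: add cell 10 to cell 11, store the sum in cell 12.
mem = [("LOAD", 10), ("ADD", 11), ("STORE", 12), ("HALT", 0),
       0, 0, 0, 0, 0, 0, 2, 3, 0]
run(mem)
# mem[12] is now 5
```

Note that the JUMP case shows why jumps are trivial for hardware: changing the flow of control is nothing more than writing a new value into the program counter.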
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
=== Central processing unit (CPU) ===
The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.
=== Arithmetic logic unit (ALU) ===
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic.
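The claim that complex operations can be broken down into simple steps can be made concrete with a short Python sketch. Building multiplication out of nothing but repeated addition is a toy illustration of the principle, not how real ALUs work (hardware typically uses shift-and-add circuits):

```python
def multiply(a, b):
    """Multiply two non-negative integers using only addition,
    the way a machine lacking a hardware multiplier might."""
    result = 0
    for _ in range(b):      # add a to itself b times
        result += a
    return result

def is_greater(a, b):
    """An ALU-style comparison returning a Boolean truth value."""
    return a > b

multiply(7, 6)       # 42
is_greater(64, 65)   # False, answering "is 64 greater than 65?"
```

A machine whose ALU lacks a multiply instruction simply runs more of its simpler instructions to get the same answer, which is exactly why it is slower but not less capable.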
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
=== Memory ===
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
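The cell model described above maps directly onto a Python list, with list indices playing the role of addresses. The cell numbers are simply the ones used in the text; the memory size is an arbitrary choice for the sketch:

```python
memory = [0] * 4096           # 4096 numbered cells, all initially holding 0

memory[1357] = 123            # "put the number 123 into the cell numbered 1357"
memory[2468] = 877
# "add the number in cell 1357 to the number in cell 2468
#  and put the answer into cell 1595"
memory[1595] = memory[1357] + memory[2468]

memory[1595]                  # 1000
```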
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
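These conventions can be checked directly in Python, whose integers can be converted to and from raw byte-sized cells; the `signed=True` option uses exactly the two's complement notation described above:

```python
# One byte holds 256 distinct values; with two's complement, -128..+127.
assert 2 ** 8 == 256

# Store a larger number across four consecutive byte-sized cells.
n = 123456789
cells = n.to_bytes(4, "little")              # four bytes, low-order byte first
assert int.from_bytes(cells, "little") == n  # the bytes reconstruct the number

# A negative number in two's complement notation:
neg = (-1).to_bytes(1, "little", signed=True)
assert neg == b"\xff"                        # -1 is stored as the bit pattern 11111111
```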
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties:
random-access memory or RAM
read-only memory or ROM
RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
=== Input/output (I/O) ===
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.
=== Multitasking ===
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
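Time-sharing can be sketched with Python generators, where each `yield` plays the role of an interrupt handing control back to a scheduler. The task contents and slice granularity here are invented for illustration; a real operating system preempts programs with hardware timer interrupts rather than cooperative yields:

```python
def task(name, steps):
    """A 'program' that does a bit of work, then is interrupted (yield)."""
    for i in range(steps):
        yield f"{name} step {i}"   # control returns to the scheduler here

def round_robin(tasks):
    """Give each program one time slice in turn until all have finished."""
    log = []
    while tasks:
        t = tasks.pop(0)
        try:
            log.append(next(t))    # run one slice of this program
            tasks.append(t)        # put it back at the end of the queue
        except StopIteration:
            pass                   # this program has finished
    return log

log = round_robin([task("A", 2), task("B", 2)])
# log == ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

The interleaved log shows why, at human timescales, both programs appear to run "at the same time" even though only one slice executes at any instant.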
Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
=== Multiprocessing ===
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
== Software ==
Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".
=== Languages ===
There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications.
=== Programs ===
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
==== Stored program architecture ====
This section applies to most common RAM machine–based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:

  begin:
  addi $8, $0, 0           # initialize sum to 0
  addi $9, $0, 1           # set first number to add = 1
  loop:
  slti $10, $9, 1001       # check if the number is still no more than 1000
  beq $10, $0, finish      # if the number is greater than 1000, exit the loop
  add $8, $8, $9           # update sum
  addi $9, $9, 1           # get next number
  j loop                   # repeat the summing process
  finish:
  add $2, $8, $0           # put sum in output register
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.
==== Machine code ====
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
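The idea that programs are just lists of numbers can be seen directly in CPython, whose compiled bytecode is stored as a sequence of byte values. Bytecode is an interpreter's instruction set rather than real machine code, but it illustrates the same stored-program principle: each instruction is a numeric opcode, and the whole program can be manipulated as ordinary data:

```python
import dis

# Compile a tiny program; the result is stored as a sequence of numbers.
code = compile("1 + 2", "<demo>", "eval").co_code
assert isinstance(code, bytes)       # literally bytes: opcodes and operands
numbers = list(code)                 # the same program, viewed as a list of numbers
assert all(0 <= n < 256 for n in numbers)

# A human-readable (assembly-like) view of those same numbers:
dis.dis(compile("1 + 2", "<demo>", "eval"))
```

The exact numbers vary between Python versions, which is itself a reminder that opcodes are an arbitrary numbering scheme chosen by the machine's designers.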
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
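At its core, an assembler is a table lookup from mnemonic to numeric opcode. The mnemonics and opcode numbers below are invented for illustration (real assemblers also handle labels, addressing modes, and instruction layouts):

```python
# Hypothetical instruction set: each mnemonic maps to a numeric opcode.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "JUMP": 0x04, "HALT": 0x05}

def assemble(source):
    """Translate 'MNEMONIC operand' lines into a flat list of numbers."""
    program = []
    for line in source.strip().splitlines():
        mnemonic, _, operand = line.partition(" ")
        program.append(OPCODES[mnemonic])    # the instruction, as a number
        program.append(int(operand or 0))    # its operand, as a number
    return program

machine_code = assemble("""
LOAD 10
ADD 11
STORE 12
HALT 0
""")
# machine_code == [1, 10, 2, 11, 3, 12, 5, 0]
```

The output is exactly the "long list of numbers" described above; the assembler's job is to spare the programmer from writing those numbers by hand.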
==== Programming language ====
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
===== Low-level languages =====
Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC. Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.
===== High-level languages =====
Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
==== Program design ====
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
==== Bugs ====
Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design. Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
== Networking and the Internet ==
Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. The technologies that made the Arpanet possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet.
The emergence of networking involved a redefinition of the nature and boundaries of computers. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s, computer networking became almost ubiquitous, due to the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL.
The number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
== Future ==
There is active research to make unconventional computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.
=== Computer architecture paradigms ===
There are many types of computer architectures:
Quantum computer vs. Chemical computer
Scalar processor vs. Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs. Stack machine
Harvard architecture vs. von Neumann architecture
Cellular architecture
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
=== Artificial intelligence ===
In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.
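The idea of parameters adjusted throughout training can be sketched with the simplest possible model: a single weight fitted by gradient descent. The data, learning rate, and iteration count here are invented for illustration and bear no resemblance to the scale of real neural networks:

```python
# Fit y = w * x to data generated by the true rule y = 3x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0                                  # the model's one parameter, initially untrained
for _ in range(200):                     # the training loop
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                     # adjust the parameter down the gradient
# w is now close to 3.0: the model has "learned" the rule from the data
```

Modern networks do exactly this, but with billions of parameters updated in parallel, which is why GPU hardware has been so consequential for the field.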
== Professions and organizations ==
As the use of computers has spread throughout society, there are an increasing number of careers involving computers.
The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
== See also ==
== Notes ==
== References ==
== Sources ==
== External links ==
Media related to Computers at Wikimedia Commons
Wikiversity has a quiz on this article
A Master of Science in Information Technology (abbreviated M.Sc.IT, MScIT or MSIT) is a master's degree in the field of information technology awarded by universities in many countries or a person holding such a degree. The MSIT degree is designed for those managing information technology, especially the information systems development process. The MSIT degree is functionally equivalent to a Master of Information Systems Management, which is one of several specialized master's degree programs recognized by the Association to Advance Collegiate Schools of Business (AACSB).
An MSIT degree can prepare graduates for roles such as software engineer or data scientist.
== Curriculum ==
A joint committee of Association for Information Systems (AIS) and Association for Computing Machinery (ACM) members develops a model curriculum for the Master of Science in Information Systems (MSIS). The most recent version of the MSIS Model Curriculum was published in 2016.
== Course and variants ==
The course of study is concentrated around the information systems discipline. The core courses are (typically) systems analysis, systems design, data communications, database design, project management, and security.
The degree typically includes coursework in both computer science and business skills, but the core curriculum might depend on the school and result in other degrees and specializations, including:
Master of Science (Information Technology) M.Sc.(I.T)
Master of Computer Applications (MCA)
Master in Information Science (MIS)
Master of Science in Information and Communication Technologies (MS-ICT)
Master of Science in Information Systems Management (MISM)
Master of Science in Information Technology (MSIT or MS in IT)
Master of Computer Science (MCS)
Master of Science in Information Systems (MSIS)
Master of Science in Management of Information Technology (M.S. in MIT)
Master of Information Technology (M.I.T.)
Master of IT (M. IT or MIT) in Denmark
Candidatus/candidata informationis technologiæ (Cand. it.) in Denmark
Master of Information Science and Technology (M.I.S.T.) from The University of Tokyo and Osaka University, Japan
== See also ==
ABET - Accreditation Board for Engineering and Technology (United States)
List of master's degrees
Bachelor of Computer Information Systems
Bachelor of Science in Information Technology
Master of Science in Information Systems
== References == | Wikipedia/Master_of_Science_in_Information_Technology |
The Bachelor of Computer Science (abbreviated BCompSc or BCS) is a bachelor's degree for completion of an undergraduate program in computer science. In general, computer science degree programs emphasize the mathematical and theoretical foundations of computing.
== Typical requirements ==
Because computer science is a wide field, courses required to earn a bachelor of computer science degree vary. A typical list of course requirements includes topics such as:
Computer programming
Programming paradigms
Algorithms
Data structures
Logic and computation
Computer architecture
Some schools may place more emphasis on mathematics and require additional courses such as:
Linear algebra
Calculus
Probability theory and statistics
Combinatorics and discrete mathematics
Differential calculus and mathematics
Beyond the basic set of computer science courses, students can typically choose additional courses from a variety of different fields, such as:
Theory of computation
Operating systems
Numerical computation
Compilers, compiler design
Real-time computing
Distributed systems
Computer networking
Data communication
Computer graphics
Artificial intelligence
Human-computer interaction
Information theory
Software testing
Information assurance
Quality assurance
Some schools allow students to specialize in a certain area of computer science.
== Related degrees ==
Bachelor of Software Engineering
Bachelor of Science in Information Technology
Bachelor of Computing
Bachelor of Information Technology
Bachelor of Computer Information Systems
== See also ==
Computer science
Computer science and engineering
Bachelor of Business Information Systems
== References == | Wikipedia/Bachelor_of_Computer_Science |
A doctorate (from Latin doctor, meaning "teacher") or doctoral degree is a postgraduate academic degree awarded by universities and some other educational institutions, derived from the ancient formalism licentia docendi ("licence to teach").
In most countries, a research degree qualifies the holder to teach at university level in the degree's field or work in a specific profession. There are a number of doctoral degrees; the most common is the Doctor of Philosophy (PhD), awarded in many different fields, ranging from the humanities to scientific disciplines.
Many universities also award honorary doctorates to individuals deemed worthy of special recognition, either for scholarly work or other contributions to the university or society.
== History ==
=== Middle Ages ===
The term doctor derives from Latin, meaning "teacher" or "instructor". The doctorate (Latin: doctoratus) appeared in medieval Europe as a license to teach Latin (licentia docendi) at a university. Its roots can be traced to the early church in which the term doctor referred to the Apostles, Church Fathers, and other Christian authorities who taught and interpreted the Bible.
The right to grant a licentia docendi (i.e. the doctorate) was originally reserved to the Catholic Church, which required the applicant to pass a test, take an oath of allegiance, and pay a fee. The Third Council of the Lateran of 1179 guaranteed access, at that time essentially free of charge, to all able applicants, who were tested for aptitude. This right remained a bone of contention between the church authorities and the universities, which were slowly distancing themselves from the Church. In 1213 the right was granted by the pope to the University of Paris, where it became a universal license to teach (licentia ubique docendi). However, while the licentia continued to hold a higher prestige than the bachelor's degree (baccalaureus), the latter was ultimately reduced to an intermediate step to the master's degree (magister) and doctorate, both of which now became the accepted teaching qualifications. According to Keith Allan Noble (1994), the first doctoral degree was awarded by the University of Paris around 1150.
George Makdisi theorizes that the ijazah issued in early Islamic madrasahs was the origin of the doctorate later issued in medieval European universities. Alfred Guillaume and Syed Farid al-Attas agree that there is a resemblance between the ijazah and the licentia docendi. However, Toby Huff and others reject Makdisi's theory. Devin J. Stewart notes a difference in the granting authority (an individual professor for the ijazah and a corporate entity in the case of the university doctorate).
=== 17th and 18th centuries ===
The doctorate of philosophy developed in Germany in the 17th century (likely c. 1652). The term "philosophy" does not refer here to the field or academic discipline of philosophy; it is used in a broader sense under its original Greek meaning of "love of wisdom". In most of Europe, all fields (history, philosophy, social sciences, mathematics, and natural philosophy/natural sciences) were traditionally known as philosophy, and in Germany and elsewhere in Europe the basic faculty of liberal arts was known as the "faculty of philosophy". The Doctorate of Philosophy adheres to this historic convention, even though most such degrees are not for the study of philosophy. Chris Park explains that it was not until formal education and degree programs were standardized in the early 19th century that the Doctorate of Philosophy was reintroduced in Germany as a research degree, abbreviated as Dr. phil. (similar to the Ph.D. in Anglo-American countries). Germany, however, went on to differentiate in more detail between doctorates in philosophy and doctorates in the natural sciences, abbreviated as Dr. rer. nat., and doctorates in the social/political sciences, abbreviated as Dr. rer. pol., similar to the other traditional doctorates in medicine (Dr. med.) and law (Dr. jur.).
University doctoral training was a form of apprenticeship to a guild. The traditional term of study before new teachers were admitted to the guild of "Masters of Arts" was seven years, matching the apprenticeship term for other occupations. Originally the terms "master" and "doctor" were synonymous, but over time the doctorate came to be regarded as a higher qualification than the master's degree.
University degrees, including doctorates, were originally restricted to men. The first women to be granted doctorates were Juliana Morell in 1608 at Lyons or maybe Avignon (she "defended theses" in 1606 or 1607, although claims that she received a doctorate in canon law in 1608 have been discredited), Elena Cornaro Piscopia in 1678 at the University of Padua, Laura Bassi in 1732 at Bologna University, Dorothea Erxleben in 1754 at Halle University and María Isidra de Guzmán y de la Cerda in 1785 at Complutense University, Madrid.
=== Modern times ===
The use and meaning of the doctorate have changed over time and are subject to regional variations. For instance, until the early 20th century, few academic staff or professors in English-speaking universities held doctorates, except for very senior scholars and those in holy orders. After that time, the German practice of requiring lecturers to have completed a research doctorate spread. Universities' shift to research-oriented education (based upon the scientific method, inquiry, and observation) increased the doctorate's importance. Today, a research doctorate (PhD) or its equivalent (as defined in the US by the NSF) is generally a prerequisite for an academic career. However, many recipients do not work in academia.
Professional doctorates developed in the United States from the 19th century onward. The first professional doctorate offered in the United States was the MD at Kings College (now Columbia University) after the medical school's founding in 1767. However, this was not a professional doctorate in the modern American sense; it was awarded for further study after the qualifying Bachelor of Medicine (MB) rather than as a qualifying degree. The MD became the standard first degree in medicine in the US during the 19th century, but as a three-year undergraduate degree; it did not become established as a graduate degree until 1930. As the standard qualifying degree in medicine, the MD gave that profession the ability (through the American Medical Association, established in 1847 for this purpose) to set and raise standards for entry into professional practice.

In the shape of the German-style PhD, the modern research degree was first awarded in the US in 1861, at Yale University. This differed from the MD in that the latter was a vocational "professional degree" that trained students to apply or practice knowledge rather than generate it, similar to other students in vocational schools or institutes. In the UK, research doctorates initially took the form of higher doctorates in Science and Letters, first introduced at Durham University in 1882. The PhD spread to the UK from the US via Canada and was instituted at all British universities from 1917. The first (titled a DPhil) was awarded at the University of Oxford.
Following the MD, the next professional doctorate in the US, the Juris Doctor (JD), was established by the University of Chicago in 1902. However, it took a long time to be accepted, not replacing the Bachelor of Laws (LLB) until the 1960s, by which time the LLB was generally taken as a graduate degree. Notably, the JD and LLB curricula were identical, with the degree simply being renamed as a doctorate, and it (like the MD) was not equivalent to the PhD, raising criticism that it was "not a 'true Doctorate'". When professional doctorates were established in the UK in the late 1980s and early 1990s, they did not follow the US model; instead, they were set up as research degrees at the same level as PhDs but with some taught components and a professional focus for research work.
Now usually called higher doctorates in the United Kingdom, the older-style doctorates take much longer to complete since candidates must show themselves to be leading experts in their subjects. These doctorates are less common than the PhD in some countries and are often awarded honoris causa. The habilitation is still used for academic recruitment purposes in many countries within the EU. It involves either a long new thesis (a second book) or a portfolio of research publications. The habilitation (highest available degree) demonstrates independent and thorough research, experience in teaching and lecturing, and, more recently, the ability to generate supportive funding. The habilitation follows the research doctorate, and in Germany, it can be a requirement for appointment as a Privatdozent or professor.
== Types ==
Since the Middle Ages, the number and types of doctorates awarded by universities have proliferated throughout the world. Practice varies from one country to another. While a doctorate usually entitles a person to be addressed as "doctor", the use of the title varies widely depending on the type and the associated occupation.
=== Research doctorate ===
Research doctorates are awarded in recognition of academic research that is publishable, at least in principle, in a peer-reviewed academic journal. The best-known research degree in the English-speaking world is the Doctor of Philosophy (abbreviated PhD or, at a small number of British universities, DPhil) awarded in many countries throughout the world. In the US, for instance, although the most typical research doctorate is the PhD, accounting for about 98% of the research doctorates awarded, there are more than 15 other names for research doctorates. Other research-oriented doctorates (some having a professional practice focus) include the Doctor of Education (EdD), the Doctor of Science (DSc or ScD), Doctor of Arts (DA), Doctor of Juridical Science (JSD or SJD), Doctor of Musical Arts (DMA), Doctor of Professional Studies/Professional Doctorate (ProfDoc or DProf), Doctor of Public Health (DrPH), Doctor of Social Science (DSSc or DSocSci), Doctor of Management (DM, DMan or DMgt), Doctor of Business Administration (DBA), Doctor of Engineering (DEng, DESc, DES or EngD), the German engineering doctorate Doktoringenieur (Dr.-Ing.), the natural science doctorate Doctor rerum naturalium (Dr. rer. nat.), and the economics and social science doctorate Doctor rerum politicarum (Dr. rer. pol.). The UK Doctor of Medicine (MD or MD (Res)) and Doctor of Dental Surgery (DDS) are research doctorates. The Doctor of Theology (ThD or DTh), Doctor of Practical Theology (DPT) and the Doctor of Sacred Theology (STD, or DSTh) are research doctorates in theology.
Criteria for research doctorates vary but typically require completion of a substantial body of original research, which may be presented as a single thesis or dissertation, or as a portfolio of shorter project reports (thesis by publication). The submitted dissertation is assessed by a committee, typically of internal and external examiners. It is then typically defended by the candidate during an oral examination (called a viva (voce) in the UK and India) by the committee, which then awards the degree unconditionally, awards the degree conditionally (subject to changes ranging from corrections in grammar to additional research), or denies the degree. Candidates may also be required to complete graduate-level courses in their field and study research methodology.
Criteria for admission to doctoral programs vary. Students may be admitted with a bachelor's degree in the US and the UK; elsewhere, e.g. in Finland and many other European countries, a master's degree is required. The time required to complete a research doctorate varies from three years, excluding undergraduate study, to six years or more.
=== Licentiate ===
Licentiate degrees vary widely in their meaning, and in a few countries are doctoral-level qualifications. Sweden awards the licentiate degree as a two-year qualification at the doctoral level and the doctoral degree (PhD) as a four-year qualification. Sweden originally abolished the licentiate in 1969 but reintroduced it in response to demands from business. Finland also has a two-year doctoral-level licentiate degree, similar to Sweden's. Outside Scandinavia, the licentiate is usually a lower-level qualification. In Belgium, the licentiate was the basic university degree prior to the Bologna Process and was equivalent to a bachelor's degree; in France and other countries, it is the bachelor's-level qualification in the Bologna process. In the Pontifical system, the Licentiate in Sacred Theology (STL) is equivalent to an advanced master's degree, or the post-master's coursework required in preparation for a doctorate (i.e. similar in level to the Swedish/Finnish licentiate degree), while other licences (such as the Licence in Canon Law) are at the level of master's degrees.
=== Higher doctorate and post-doctoral degrees ===
A higher tier of research doctorates may be awarded based on a formally submitted portfolio of published research of an exceptionally high standard. Examples include the Doctor of Science (DSc or ScD), Doctor of Divinity (DD), Doctor of Letters (DLitt or LittD), Doctor of Law or Laws (LLD), and Doctor of Civil Law (DCL) degrees found in the UK, Ireland and some Commonwealth countries, and the traditional doctorates in Scandinavia like the Doctor Medicinae (Dr. Med.).
The habilitation teaching qualification (facultas docendi, "faculty to teach"), awarded under a university procedure with a thesis and an exam, is commonly regarded as belonging to this category in Germany, Austria, France, Liechtenstein, Switzerland, Poland, and elsewhere. The degree developed in Germany in the 19th century "when holding a doctorate seemed no longer sufficient to guarantee a proficient transfer of knowledge to the next generation". In many federal states of Germany, the habilitation results in the award of a formal "Dr. habil." degree, or the holder may add "habil." to their research doctorate, as in "Dr. phil. habil." or "Dr. rer. nat. habil." In some European universities, especially in German-speaking countries, the degree alone is insufficient for taking on teaching duties without professor supervision (or for teaching and supervising PhD students independently) without an additional teaching title such as Privatdozent. In Austria, the habilitation bestows on the graduate the facultas docendi and venia legendi and, since 2004, the honorary title of "Privatdozent" (before this, completing the habilitation resulted in appointment as a civil servant). In many Central and Eastern European countries, the degree gives venia legendi, Latin for "the permission to lecture", or ius docendi, "the right to teach", a specific academic subject at universities for a lifetime. The French academic system used to have a higher doctorate, called the "state doctorate" (doctorat d'État), but in 1984 it was superseded by the habilitation (Habilitation à diriger des recherches, "habilitation to supervise (doctoral and post-doctoral) research", abbreviated HDR), which is the prerequisite for supervising PhDs and for applying to full professorships. In many countries of the former Soviet Union, for example the Russian Federation or Ukraine, there is a higher doctorate (above the title of "Candidate of Sciences"/PhD) under the title "Doctor of Sciences".
While this section has focused on earned qualifications conferred by virtue of published work or the equivalent, a higher doctorate may also be presented on an honorary basis by a university, at its own initiative or after a nomination, in recognition of public prestige, institutional service, philanthropy, or professional achievement. In a formal listing of qualifications, and often in other contexts, an honorary higher doctorate is identified using language such as "DCL, honoris causa", "Hon LLD", or "LittD h.c.".
=== Professional doctorate ===
Depending on the country, professional doctorates may also be research degrees at the same level as PhDs. The relationship between research and practice is considered important, and professional degrees with little or no research content are typically aimed at professional performance. Many professional doctorates are named "Doctor of [subject name]" and abbreviated using the form "D[subject abbreviation]" or "[subject abbreviation]D", or may use the more generic titles "Professional Doctorate", abbreviated "ProfDoc" or "DProf", "Doctor of Professional Studies" (DPS) or "Doctor of Professional Practice" (DPP).
In the US, professional doctorates (formally "doctor's degree – professional practice" in government classifications) are defined by the US Department of Education's National Center for Education Statistics as degrees that require a minimum of six years of university-level study (including any pre-professional bachelor's or associate degree) and meet the academic requirements for professional licensure in the discipline. The definition for a professional doctorate does not include a requirement for either a dissertation or study beyond master's level, in contrast to the definition for research doctorates ("doctor's degree – research/scholarship"). However, individual programs may have different requirements. There is also a category of "doctor's degree – other" for doctorates that do not fall into either the "professional practice" or "research/scholarship" categories. All of these are considered doctoral degrees.
In contrast to the US, many countries reserve the term "doctorate" for research degrees. If, as in Canada and Australia, professional degrees bear the name "Doctor of ...", it is made clear that these are not doctorates. Examples of this include Doctor of Pharmacy (PharmD), Doctor of Medicine (MD), Doctor of Dental Surgery (DDS), Doctor of Nursing Practice (DNP), and Juris Doctor (JD). Contrariwise, for example, research doctorates like Doctor of Business Administration (DBA), Doctor of Education (EdD) and Doctor of Social Science (DSS) qualify as full academic doctorates in Canada, though they normally incorporate aspects of professional practice in addition to a full dissertation. In the Philippines, the University of the Philippines Open University offers a Doctor of Communication (DComm) professional doctorate.
All doctorates in the UK and Ireland are third cycle qualifications in the Bologna Process, comparable to US research doctorates. Although all doctorates are research degrees, professional doctorates normally include taught components, while the name PhD/DPhil is normally used for doctorates purely by thesis. Professional, practitioner, or practice-based doctorates such as the DClinPsy, MD, DHSc, EdD, DBA, EngD and DAg are full academic doctorates. They are at the same level as the PhD in the national qualifications frameworks; they are not first professional degrees but are "often post-experience qualifications" in which practice is considered important in the research context. In 2009 there were 308 professional doctorate programs in the UK, up from 109 in 1998, with the most popular being the EdD (38 institutions), DBA (33), EngD/DEng (22), MD/DM (21), and DClinPsy/DClinPsych/ClinPsyD (17). Similarly, in Australia, the term "professional doctorate" is sometimes applied to the Scientiae Juridicae Doctor (SJD), which, like the UK professional doctorates, is a research degree.
=== Honorary doctorate ===
When a university wishes to formally recognize an individual's contributions to a particular field or philanthropic efforts, it may choose to grant a doctoral degree honoris causa ('for the sake of the honor'), waiving the usual requirements for granting the degree. Some universities do not award honorary degrees, for example, Cornell University, the University of Virginia, and Massachusetts Institute of Technology.
== National variations ==
=== Argentina ===
In Argentina the doctorate (doctorado) is the highest academic degree. The intention is that candidates produce original contributions in their field of knowledge within a frame of academic excellence. A dissertation or thesis is prepared under the supervision of a tutor or director and is reviewed by a Doctoral Committee composed of examiners external to the program and at least one examiner external to the institution. The degree is conferred after a successful dissertation defence. In 2006, there were approximately 2,151 postgraduate careers in the country, of which 14% were doctoral degrees. Doctoral programs in Argentina are overseen by the National Commission for University Evaluation and Accreditation, an agency of Argentina's Ministry of Education, Science and Technology.
=== Australia ===
The Australian Qualifications Framework (AQF) categorizes tertiary qualifications into ten levels, numbered from one to ten in ascending order of complexity and depth. Of these qualification levels, six are for higher education qualifications and are numbered from five to ten. Doctoral degrees occupy the highest of these levels: level ten. All doctoral degrees involve research, and this is a defining characteristic of them. There are three categories of doctoral degrees recognized by the AQF: research doctorates, professional doctorates and higher doctorates.
Research doctorates and professional doctorates are both completed as part of a programme of study and supervised research. Both have entry requirements of the student having a supervisor who has agreed to supervise their research, along with the student possessing an honours degree with upper second-class honours or better, or a master's degree with a substantial research component. Research doctorates are typically titled Doctor of Philosophy and are awarded on the basis of an original and significant contribution to knowledge. Professional doctorates are typically titled Doctor of (field of study) and are awarded on the basis of an original and significant contribution to professional practice.
Higher doctorates are typically titled similarly to professional doctorates and are awarded based on a submitted portfolio of research that follows a consistent theme and is internationally recognized as an original and substantive contribution to knowledge beyond that required for the awarding of a research doctorate. Typically, to be eligible for a higher doctorate, a student must have completed a research doctorate at least seven to ten years prior to submitting the research portfolio.
=== Brazil ===
Doctoral candidates are normally required to have a master's degree in a related field. Exceptions are based on their individual academic merit. A second and a third foreign language are other common requirements, although the requirements regarding proficiency commonly are not strict. The admissions process varies by institution. Some require candidates to take tests while others base admissions on a research proposal application and interview only. In both instances however, a faculty member must agree prior to admission to supervise the applicant.
Requirements usually include satisfactory performance in advanced graduate courses, passing an oral qualifying exam and submitting a thesis that must represent an original and relevant contribution to existing knowledge. The thesis is examined in a final public oral exam administered by at least five faculty members, two of whom must be external. After completion, which normally consumes 4 years, the candidate is commonly awarded the degree of Doutor (Doctor) followed by the main area of specialization, e.g. Doutor em Direito (Doctor of Laws), Doutor em Ciências da Computação (Doctor of Computer Sciences), Doutor em Filosofia (Doctor of Philosophy), Doutor em Economia (Doctor of Economics), Doutor em Engenharia (Doctor of Engineering) or Doutor em Medicina (Doctor of Medicine). The generic title of Doutor em Ciências (Doctor of Sciences) is normally used to refer collectively to doctorates in the natural sciences (i.e. Physics, Chemistry, Biological and Life Sciences, etc.)
All graduate programs in Brazilian public universities are tuition-free (mandated by the Brazilian constitution). Some graduate students are additionally supported by institutional scholarships granted by federal government agencies like CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and CAPES (Coordenação de Aperfeiçoamento do Pessoal de Ensino Superior). Personal scholarships are provided by the various FAP's (Fundações de Amparo à Pesquisa) at the state level, especially FAPESP in the state of São Paulo, FAPERJ in the state of Rio de Janeiro and FAPEMIG in the state of Minas Gerais. Competition for graduate financial aid is intense and most scholarships support at most 2 years of Master's studies and 4 years of doctoral studies. The normal monthly stipend for doctoral students in Brazil is between US$500 and $1000.
A degree of Doutor usually enables an individual to apply for a junior faculty position equivalent to a US assistant professor. Progression to full professorship, known as Professor Titular, requires that the candidate be successful in a competitive public exam and normally takes additional years. In the federal university system, doctors who are admitted as junior faculty members may progress (usually by seniority) to the rank of associate professor, then become eligible to take the competitive exam for vacant full professorships. In São Paulo state universities, associate professorships and subsequent eligibility to apply for a full professorship are conditioned on the qualification of Livre-docente, which requires, in addition to a doctorate, a second thesis or cumulative portfolio of peer-reviewed publications, a public lecture before a panel of experts (including external members from other universities), and a written exam.
In recent years some initiatives as jointly supervised doctorates (e.g. "cotutelles") have become increasingly common in the country, as part of the country's efforts to open its universities to international students.
=== Denmark ===
Denmark offers two types of "doctorate"-like degrees:
A three-year ph.d. degree program, which replaced the equivalent licentiat in 1992, and does not grant the holder the right to the title dr. or doktor. At the same time, a minor, two-year research training program, leading to a title of "magister", was phased out to meet the international standards of the Bologna Process.
A 'full' doctor's degree (e.g. dr.phil., Doctor Philosophiae, for humanistic and STEM subjects) – the higher doctorate – introduced in 1479. The second part of the title communicates the field of study – e.g. dr.scient (in the sciences), dr.jur (in law), dr.theol (in theology).
For the ph.d. degree, the candidates (ph.d. students or fellows) – who are required to have a master's degree – enroll at a ph.d. school at a university and participate in a research training program, at the end of which they each submit a thesis and defend it orally at a formal disputation. In the disputation, the candidates defend their theses against three official opponents, and may take opponents or questions from those present in the auditorium (ex auditorio).
For the higher doctorate, the candidate (referred to as præses) is required to submit a thesis of major scientific significance, and to proceed to defend it orally against two official opponents, as well as against any and all opponents from the auditorium (ex auditorio) – no matter how long the proceedings take. The official opponents are required to be full professors. The candidate is required to have a master's degree, but not necessarily a ph.d.
The ph.d. was introduced as a separate title from the higher doctorate in 1992 as part of the transition to a new degree structure, since the changes in the degree system would otherwise have left a significant number of academics without immediately recognizable qualifications in international settings. The original vision was purportedly to phase out the higher doctorate in favor of the ph.d. (or merge the two), but so far there are no signs of this happening. Many Danish academics with permanent positions wrote ph.d. dissertations in the 1990s, when the system was new, since at that time a ph.d. degree or equivalent qualifications began to be required for certain academic positions in Denmark. Until the late 20th century, the higher doctorate was a condition for attaining a full professorship; it is no longer required per se for any position, but is considered amply equivalent to the ph.d. when applying for academic positions.
=== Egypt ===
In Egypt, the highest doctorate is awarded by Al-Azhar University (established in 970), which grants the ʿĀlimiyya (العالمية) degree, comparable to a habilitation.
The Medical doctorate (abbreviated as M.D.) is equivalent to the Ph.D. degree. To earn an M.D. in a science specialty, one must have a master's degree (M.Sc.) (or two diplomas before the introduction of M.Sc. degree in Egypt) before applying. The M.D. degree involves courses in the field and defending a dissertation. It takes on average three to five years.
Many postgraduate medical and surgical specialties students earn a doctorate. After finishing a 6-year medical school and one-year internship (house officer), physicians and surgeons earn the M.B. B.Ch. degree, which is equivalent to a US MD degree. They can then apply to earn a master's degree or a speciality diploma, then an MD degree in a specialty.
The Egyptian M.D. degree is written using the name of one's specialty. For example, M.D. (Geriatrics) means a doctorate in Geriatrics, which is equivalent to a Ph.D. in Geriatrics.
=== Finland ===
The Finnish requirement for the entrance into doctoral studies is a master's degree or equivalent. All universities have the right to award doctorates. The ammattikorkeakoulu institutes (institutes of higher vocational education that are not universities but often called "Universities of Applied Sciences" in English) do not award doctoral or other academic degrees. The student must:
Demonstrate understanding of their field and its significance, and acquire the ability to apply scientific or scholarly methods to create new knowledge in it.
Obtain a thorough understanding of the development, basic problems, and research methods of their field.
Obtain such understanding of the general theory of science and letters and such knowledge of neighbouring research fields that they are able to follow the development of these fields.
The way to show that these general requirements have been met is:
Complete graduate coursework.
Demonstrate critical and independent thought
Prepare and publicly defend a dissertation (a monograph or a compilation thesis of peer-reviewed articles). In fine arts, the dissertation may be substituted by works and/or performances as accepted by the faculty.
Entrance to a doctoral program is available only to holders of a master's degree; there is no honors procedure for admitting holders of only a bachelor's degree. Entrance is not as controlled as in undergraduate studies, where a strict numerus clausus is applied. Usually, a prospective student discusses their plans with a professor. If the professor agrees to accept the student, the student applies for admission. The professor may recruit students to their group. Formal acceptance does not imply funding. The student must obtain funding either by working in a research unit or through private scholarships. Funding is more available for the natural and engineering sciences than in letters. Sometimes, normal work and research activity are combined.
Prior to introduction of the Bologna process, Finland required at least 42 credit weeks (1,800 hours) of formal coursework. The requirement was removed in 2005, leaving the decision to individual universities, which may delegate the authority to faculties or individual professors. In Engineering and Science, required coursework varies between 40 and 70 ECTS.
The duration of graduate studies varies. It is possible to graduate three years after the master's degree, while much longer periods are not uncommon. The study ends with a dissertation, which must present substantial new scientific/scholarly knowledge. The dissertation can be either a monograph or an edited collection of 3 to 7 journal articles. Students unable or unwilling to write a dissertation may qualify for a licentiate degree by completing the coursework requirement and writing a shorter thesis, usually summarizing one year of research.
When the dissertation is ready, the faculty names two expert pre-examiners with doctoral degrees from outside the university. During the pre-examination process, the student may receive comments on the work and respond with modifications. After the pre-examiners approve, the doctoral candidate applies to the faculty for permission to print the thesis. When granting this permission, the faculty names the opponent for the thesis defence, who must also be an outside expert with at least a doctorate. In all Finnish universities, long tradition requires that the printed dissertation hang on a cord by a public university noticeboard for at least ten days prior to the dissertation defence.
The dissertation defence takes place in public. The opponent and the candidate conduct a formal debate, usually wearing white tie, under the supervision of the thesis supervisor. Family, friends, colleagues and the members of the research community customarily attend the defence. After a formal entrance, the candidate begins with an approximately 20-minute popular lecture (lectio praecursoria), which is meant to introduce laymen to the thesis topic. The opponent follows with a short talk on the topic, after which the pair critically discuss the dissertation. The proceedings take two to three hours. At the end, the opponent presents their final statement and reveals whether they will recommend that the faculty accept the dissertation. Any member of the public then has an opportunity to raise questions, although this is rare. Immediately after the defence, the supervisor, the opponent and the candidate drink coffee with the public. Usually, the attendees of the defence are given the printed dissertation. In the evening, the passed candidate hosts a dinner (Finnish: karonkka) in honour of the opponent. Usually, the candidate invites their family, colleagues and collaborators.
Doctoral graduates are often Doctors of Philosophy (filosofian tohtori), but many fields retain their traditional titles: Doctor of Medicine (lääketieteen tohtori), Doctor of Science in technology (tekniikan tohtori), Doctor of Science in arts (Art and Design), etc.
The doctorate is a formal requirement for a docenture or professor's position, although these in practice require postdoctoral research and further experience. Exceptions may be granted by the university governing board, but this is uncommon, and usually due to other work and expertise considered equivalent.
=== France ===
History
Before 1984 three research doctorates existed in France: the State doctorate (doctorat d'État, "DrE", the old doctorate introduced in 1808), the third cycle doctorate (doctorat de troisième cycle, also called doctorate of specialty, doctorat de spécialité, created in 1954 and shorter than the State doctorate) and the diploma of doctor-engineer (diplôme de docteur-ingénieur created in 1923), for technical research.
During the first half of the 20th century, following the submission of two theses (primary thesis, thèse principale, and secondary thesis, thèse complémentaire) to the Faculty of Letters (in France, "letters" is equivalent to "humanities") at the University of Paris, the doctoral candidate was awarded the Doctorat ès lettres. There was also the less prestigious "university doctorate", doctorat d'université, which could be received for the submission of a single thesis.
In the 1950s, the Doctorat ès lettres was renamed to Doctorat d'État. In 1954 (for the sciences) and 1958 (for letters and human sciences), the less demanding doctorat de troisième cycle degree was created on the model of the American Ph.D. with the purpose to lessen what had become an increasingly long period of time between the typical students' completion of their Diplôme d'études supérieures, roughly equivalent to a Master of Arts, and their Doctorat d'État.
After 1984, only one type of doctoral degree remained: the "doctorate" (Doctorat). A special diploma was created called the "Habilitation to Supervise Research" (also translated as "accreditation to supervise research"; Habilitation à diriger des recherches), a professional qualification to supervise doctoral work. (This diploma is similar in spirit to the older State doctorate, and the requirements for obtaining it are similar to those necessary to obtain tenure in other systems.) Previously, only professors or senior full researchers of similar rank were normally authorized to supervise a doctoral candidate's work. Now the habilitation is a prerequisite for the title of professor in a university (Professeur des universités) and for the title of Research Director (Directeur de recherche) in national public research agencies such as CNRS, INRIA, or INRAE.
Admission
Today, the doctorate (doctorat) is a research-only degree. It is a national degree and its requirements are fixed by the minister of higher education and research. Only public institutions award the doctorate. It can be awarded in any field of study. The master's degree is a prerequisite. The normal duration is three years. The writing of a comprehensive thesis constitutes the bulk of the doctoral work. While the length of the thesis varies according to the discipline, it is rarely less than 150 pages, and often substantially more. Some 15,000 new doctoral matriculations occur every year and ≈10,000 doctorates are awarded.
Doctoral candidates can apply for a three-year fellowship. The most well known is the Contrat Doctoral (4,000 granted every year with a gross salary of 1758 euros per month as of September 2016).
Since 2002, candidates follow in-service training, but there is no written examination for the doctorate. The candidate has to write a thesis that is read by two external reviewers. The head of the institution decides whether the candidate can defend the thesis, after considering the external reviews. The jury members are designated by the head of the institution. The candidate's supervisor and the external reviewers are generally jury members. The maximum number of jury members is 8. The defense generally lasts 45 minutes in scientific fields, followed by 1 to 2½ hours of questions from the jury or other doctors present. The defense and questions are public. The jury then deliberates in private and then declares the candidate admitted or "postponed". The latter is rare. New regulations were set in 2016 and do not award distinctions.
The title of doctor (docteur) can also be used by medical and pharmaceutical practitioners who hold a doctor's State diploma (diplôme d'État de docteur, distinct from the doctorat d'État mentioned above). The diploma is a first degree.
A guideline with good practices and legal analysis was published in 2018 by the Association nationale des docteurs (ANDès) and the Confédération des Jeunes Chercheurs (CJC) with funding from the French Ministry of research.
=== Germany ===
Doctoral degrees in Germany are research doctorates and are awarded by a process called Promotion. Most doctorates are awarded with specific Latin designations for the field of research (except for engineering, where the designation is German), instead of a general name for all fields (such as the Ph.D.). The most important degrees are:
Dr. theol. (theologiae; theology);
Dr. phil. (philosophiae; humanities such as philosophy, philology, history, and social sciences such as sociology, political science, or psychology as well);
Dr. rer. nat. (rerum naturalium; natural and formal sciences, i.e. physics, chemistry, biology, mathematics, computer science and information technology, or psychology);
Dr. iur. (iuris; law);
Dr. med. (medicinae; medicine);
Dr. med. dent. (medicinae dentariae; dentistry);
Dr. med. vet. (medicinae veterinariae; veterinary medicine);
Dr.-Ing. (engineering);
Dr. oec. (oeconomiae; economics);
Dr. rer. pol. (rerum politicarum; economics, business administration, political science).
The concept of a US-style professional doctorate as an entry-level professional qualification does not exist. Professional doctorates obtained in other countries, not requiring a thesis or not being third cycle qualifications under the Bologna process, can only be used postnominally, e.g., "Max Mustermann, MD", and do not allow the use of the title Dr.
In medicine, "doctoral" dissertations are often written alongside undergraduate study; therefore, the European Research Council decided in 2010 that such Dr. med. degrees do not meet the international standards of a Ph.D. research degree. The duration of the doctorate depends on the field: a doctorate in medicine may take less than a full-time year to complete; those in other fields, two to six years.
Over fifty doctoral designations exist, many of them rare or no longer in use. As a title, the degree is commonly written in front of the name in abbreviated form, e.g., Dr. rer. nat. Max Mustermann or Dr. Max Mustermann, dropping the designation entirely. However, leaving out the designation is only allowed when the doctorate degree is not an honorary doctorate, which must be indicated by Dr. h.c. (from Latin honoris causa). Although the honorific does not become part of the name, holders can demand that the title appear in official documents. The title is not mandatory. The honorific is commonly used in formal letters. For holders of other titles, only the highest title is mentioned. In contrast to English, in which a person's name is preceded by at most one title (except in very ceremonious usage), the formal German mode of address permits several titles in addition to "Herr" or "Frau" (which, unlike "Mr" or "Ms", is not considered a title at all, but an Anrede or "address"), including repetitions in the case of multiple degrees, as in "Frau Prof. Dr. Dr. Schmidt", for a person who would be addressed as "Prof. Schmidt" in English.
In the German university system it is common to write two doctoral theses, the inaugural thesis (Inauguraldissertation), completing a course of study, and the habilitation thesis (Habilitationsschrift), which opens the road to a professorship. Upon completion of the habilitation thesis, a Habilitation is awarded, which is indicated by appending habil. (habilitata/habilitatus) to the doctorate, e.g., Dr. rer. nat. habil. Max Mustermann. Formally, it is considered an additional academic qualification rather than an academic degree. It qualifies the holder to teach at German universities (facultas docendi). The holder of a Habilitation receives the authorization to teach a certain subject (venia legendi). This has been the traditional prerequisite for attaining Privatdozent (PD) status and employment as a full university professor. With the introduction of Juniorprofessuren—around 2005—as an alternative track towards becoming a professor at universities (with tenure), the Habilitation is no longer the only university career track.
=== India ===
In India, doctorates are offered by universities. Entry requirements include a master's degree. Some universities consider undergraduate degrees in professional areas such as engineering, medicine or law as qualifications for pursuing doctorate level degrees. Entrance examinations are held for almost all programs. In most universities, the combined duration of coursework and thesis is 3–7 years. The most common doctoral degree is the Ph.D.
=== Italy ===
Until the introduction of the dottorato di ricerca in the mid-1980s, the laurea generally constituted the highest academic degree obtainable in Italy. The first institution in Italy to create a doctoral program was Scuola Normale Superiore di Pisa in 1927 under the historic name "Diploma di Perfezionamento". Further, the dottorato di ricerca was introduced by law and presidential decree in 1980, in a reform of academic teaching, training and experimentation in organisation and teaching methods.
Italy uses a three-level degree system following the Bologna Process. The first-level degree, called a laurea (Bachelor's degree), requires three years and a short thesis. The second-level degree, called a laurea magistrale (Master's degree), is obtained after two additional years, specializing in a branch of the field. This degree requires more advanced thesis work, usually involving academic research or an internship. The final degree is called a dottorato di ricerca (Ph.D.) and is obtained after three years of academic research on the subject and a thesis.
Alternatively, after obtaining the laurea or the laurea magistrale, one can complete a "Master's" (first-level Master's after the laurea; second-level Master's after the laurea magistrale) of one or two years, usually including an internship. An Italian "Master's" is not the same as a master's degree; it is intended to be more focused on professional training and practical experience.
Regardless of the field of study, the title for laurea (bachelor's) graduates is Dottore/Dottoressa (abbreviated Dott./Dott.ssa, or Dr.), not to be confused with the title for Ph.D. holders, which is instead Dottore/Dottoressa di Ricerca. A laurea magistrale grants instead the title of Dottore/Dottoressa magistrale. Graduates in the fields of Education, Art and Music are also called Dr. Prof. (or simply Professore) or Maestro. Many professional titles, such as ingegnere (engineer), are awarded only upon passing a post-graduation examination (esame di stato) and registration in the relevant professional association.
The Superior Graduate Schools in Italy (Italian: Scuola Superiore Universitaria), also called Schools of Excellence (Italian: Scuole di Eccellenza) such as Scuola Normale Superiore di Pisa and Sant'Anna School of Advanced Studies keep their historical "Diploma di Perfezionamento" title by law and MIUR Decree.
=== Japan ===
==== Dissertation-only ====
Until the 1990s, most natural science and engineering doctorates in Japan were earned by industrial researchers in Japanese companies. These degrees were awarded by the employees' former university, usually after years of research in industrial laboratories. The only requirement was submission of a dissertation, along with articles published in well-known journals. This program is called ronbun hakase (論文博士). It produced the majority of engineering doctoral degrees from national universities. University-based doctoral programs, called katei hakase (課程博士), are gradually replacing these degrees. By 1994, more doctoral engineering degrees were earned for research within university laboratories (53%) than industrial research laboratories (47%). Since 1978, the Japan Society for the Promotion of Science (JSPS) has provided tutorial and financial support for promising researchers in Asia and Africa. The program is called JSPS RONPAKU.
==== Professional degree ====
The only professional doctorate in Japan is the Juris Doctor, known as Hōmu Hakushi (法務博士). The program generally lasts two or three years. This curriculum is professionally oriented, but unlike in the US the program does not provide education sufficient for a law license. All candidates for a bar license must pass the bar exam (Shihou shiken), attend the Legal Training and Research Institute and pass the practical exam (Nikai Shiken or Shihou Shushusei koushi).
=== Netherlands and Flanders ===
The traditional academic system of the Netherlands provided a basic academic diploma, the propaedeuse, and three academic degrees: kandidaat (the lowest degree); doctorandus or, depending on gender, doctoranda (drs.), with equivalent degrees in engineering (ir.) and law (mr.); and doctor (dr.). After successful completion of the first year of university, the student was awarded the propaedeutic diploma (not a degree). In some fields, this diploma was abolished in the 1980s. In physics and mathematics, the student could directly obtain a kandidaats (candidate) diploma in two years. The candidate diploma was all but abolished by 1989. It used to be attained after completion of the majority of the courses of the academic study (usually after completion of the course requirements of the third year in the program), after which the student was allowed to begin work on their doctorandus thesis. The successful completion of this thesis conferred the doctorandus/doctoranda title, implying that the student's initial studies were finished. In addition to these 'general' degrees, specific titles equivalent to the doctorandus degree were awarded for law: meester (master) (mr.), and for engineering: ingenieur (engineer) (ir.). Following the Bologna protocol, the Dutch adopted the Anglo-Saxon system of academic degrees. The old candidate's degree was revived to become the bachelor's degree, and the doctorandus, mr. and ir. degrees were replaced by master's degrees.
Students can only enroll in a doctorate system after completing a research university level master's degree; although dispensation can be granted on a case-by-case basis after scrutiny of the individual's portfolio. The most common way to conduct doctoral studies is to work as promovendus/assistent in opleiding (aio)/onderzoeker in opleiding (oio) (research assistant with additional courses and supervision), perform extensive research and write a dissertation consisting of published articles (over a period of four or more years). Research can also be conducted without official research assistant status, for example through a business-sponsored research laboratory.
The doctor's title is the highest academic title in the Netherlands and Flanders. For research doctorates the degree is always dr. (or Ph.D.) with no distinction between disciplines, and it can only be granted by research universities.
==== Netherlands ====
Every Ph.D. thesis has to be promoted by a research university staff member holding the ius promovendi (the right to promote). In the Netherlands all full professors have the ius promovendi, as do other academic staff granted this right on an individual basis by the board of their university (almost always senior associate professors). The promotor has the role of principal advisor and determines whether the thesis quality suffices and can be submitted to the examining committee. The examining committee is appointed by the academic board of the university on the recommendation of the promotor and consists of experts in the field. The examining committee reviews the thesis manuscript and has to approve or fail the thesis. Failures at this stage are rare, because promotors generally do not submit work they deem inadequate to the examining committee: supervisors and promotors lose prestige among their colleagues should they allow a substandard thesis to be submitted.
After examining committee approval, the candidate publishes the thesis (generally more than 100 copies) and sends it to the examining committee, colleagues, friends and family with an invitation to the public defence. Additional copies are kept in the university library and the Royal Library of the Netherlands.
The degree is awarded in a formal, public, defence session, in which the thesis is defended against critical questions of the "opposition" (the examining committee). Specific formalities differ between universities, for example whether a public presentation is given, either before or during the session, specific phrasing in the procedure, and dress code. In most protocols, candidates can be supported by paranymphs, a largely ceremonial role, but they are formally allowed to take over the defence on behalf of the candidate. The actual defence lasts exactly the assigned time slot (45 minutes to one hour, depending on the university), after which the defence is suspended by the bedel, who stops the examination, frequently mid-sentence. Failure during this session is possible, but extremely rare. After formal approval of the thesis and the defence by the examining committee in a closed discussion, the session is resumed and the promotor grants the degree and hands over the diploma to the candidate, and usually congratulates the candidate and gives a personal speech praising the work of the young doctor (laudatio), before the session is formally closed.
Dutch doctors may use PhD behind their name instead of the uncapitalized dr. before their name. Those who obtained a degree in a foreign country can only use the Dutch title dr. if their degree is approved as equivalent by the Dienst Uitvoering Onderwijs, though, in accordance with the opportunity principle, little effort is spent in identifying such fraud.
Those who have multiple doctor (dr.) titles may use the title dr.mult. Those who have received honoris causa doctorates may use dr.h.c. before their own name.
The Dutch universities of technology (Eindhoven University of Technology, Delft University of Technology, University of Twente, and Wageningen University) also award a 2-year (industry oriented) Professional Doctorate in Engineering (PDEng), renamed EngD from September 2022 onwards, which does not grant the right to use the dr. title abbreviation. In 2023, a pilot started at universities of applied sciences with a professional doctoral programme, in which the focus is on applying knowledge to improve or solve professional processes or products.
==== Flanders ====
In Belgium's Flemish Community the doctorandus title was only used by those who actually started their doctoral work. Doctorandus is still used as a synonym for a Ph.D. student. The licentiaat (licensee) title was in use for a regular graduate until the Bologna reform changed the licentiaat degree to the master's degree (the Bologna reform abolished the two-year kandidaat degree and introduced a three-year academic bachelor's degree instead).
=== Poland ===
In Poland, the academic degree of doktor 'doctor' is awarded in the sciences and arts upon an examination and the defence of a doctoral dissertation. As Poland is a signatory to the Bologna Process, doctoral studies are a third cycle of studies following the bachelor's (licencjat) and master's (magister) degrees or their equivalents. A doctoral student is known as a doktorant (masculine form) or doktorantka (feminine form). The doctorate is awarded within a specified branch and discipline of science or art by a university or research institute accredited by the minister responsible for higher education. The title is abbreviated to dr in the nominative case.
Doctors may subsequently undergo a habilitation process.
=== Russia ===
Introduced in 1819 in the Russian Empire, the academic title Doctor of the Sciences (Russian: Доктор наук) marks the highest academic level achievable by a formal process.
The title was abolished with the end of the Empire in 1917 and revived by the USSR in 1934, along with a new (lower) complementary degree of Candidate of the Sciences (Russian: Кандидат наук). This system has been used since then, with minor adjustments.
The Candidate of the Sciences title is usually seen as roughly equivalent to the research doctorates in Western countries while the Doctor of the Sciences title is relatively rare and retains its exclusivity. Most "Candidates" never reach the "Doctor of the Sciences" title.
Similar title systems were adopted by many of the Soviet bloc countries.
=== Spain ===
Doctoral degrees are regulated by Royal Decree (Real Decreto, R.D. 778/1998). They are granted by the university on behalf of the king, and the diploma has the force of a public document. The Ministry of Science keeps a national registry of theses called TESEO. According to the National Institute of Statistics (INE), fewer than 5% of M.Sc. degree holders are admitted to Ph.D. programmes.
All doctoral programs are research-oriented. A minimum of 4 years of study is required, divided into 2 stages:
A 2-year (or longer) period of studies concludes with a public dissertation presented to a panel of 3 professors. Upon approval from the university, the candidate receives a Diploma de Estudios Avanzados (part-qualified doctor, equivalent to an M.Sc.). Since 2008, a recognized master's program may substitute for this diploma.
A 2-year (or longer) research period, which may be extended for up to 10 years. The student must present a thesis describing a discovery or original contribution. If approved by their thesis director, the study is presented to a panel of 5 distinguished scholars. Any Doctor attending the public defense is allowed to challenge the candidate with questions. If approved, the candidate receives the doctorate. Five marks used to be granted: Unsatisfactory (Suspenso), Pass (Aprobado), Remarkable (Notable), "Cum laude" (Sobresaliente), and "Summa cum laude" (Sobresaliente Cum Laude). Those Doctors granted their degree "Summa Cum Laude" were allowed to apply for an "Extraordinary Award".
Since September 2012, and as regulated by Royal Decree (R.D. 99/2011), three marks can be granted: Unsatisfactory (No apto), Pass (Apto) and "Cum laude" (Apto Cum Laude) as the maximum mark. In the public defense the doctor is notified whether the thesis has passed or not passed. The Apto Cum Laude mark is awarded after the public defense as the result of a private, anonymous vote. Votes are verified by the university. A unanimous vote of the reviewers nominates Doctors granted Apto Cum Laude for an "Extraordinary Award" (Premio Extraordinario de Doctorado).
In the same Royal Decree the initial 3-year study period was replaced by a Research master's degree (one or two years; Professional master's degrees do not grant direct access to Ph.D. Programs) that concludes with a public dissertation called Trabajo de Fin de Máster or Proyecto de Fin de Máster. An approved project earns a master's degree that grants access to a Ph.D. program and initiates the period of research.
A doctorate is required in order to teach at the university. Some universities offer an online Ph.D. model.
Only Ph.D. holders, Grandees and Dukes can sit and cover their heads in the presence of the King.
From 1857, Complutense University was the only one in Spain authorised to confer the doctorate. This law remained in effect until 1954, when the University of Salamanca joined in commemoration of its septcentenary. In 1970, the right was extended to all Spanish universities.
All doctorate holders are reciprocally recognised as equivalent in Germany and Spain (according to the "Bonn Agreement of November 14, 1994").
=== United Kingdom ===
==== History of the UK doctorate ====
The doctorate has long existed in the UK as, originally, the second degree in divinity, law, medicine and music. But it was not until the late 19th century that the research doctorate, now known as the higher doctorate, was introduced. The first higher doctorate was the Doctor of Science at Durham University, introduced in 1882. This was soon followed by other universities, including the University of Cambridge establishing its ScD in the same year, the University of London transforming its DSc from an advanced study course to a research degree in 1885, and the University of Oxford establishing its Doctor of Letters (DLitt) in 1900.
The PhD was adopted in the UK following a joint decision in 1917 by British universities, although it took much longer for it to become established. Oxford became the first university to institute the new degree, although naming it the DPhil. The PhD was often distinguished from the earlier higher doctorates by distinctive academic dress. At Cambridge, for example, PhDs wear a master's gown with scarlet facings rather than the full scarlet gown of the higher doctors, while the University of Wales gave PhDs crimson gowns rather than scarlet. Professional doctorates were introduced in Britain in the 1980s and 1990s. The earliest professional doctorates were in the social sciences, including the Doctor of Business Administration (DBA), Doctor of Education (EdD) and Doctor of Clinical Psychology (DClinPsy).
==== British doctorates today ====
Today, except for those awarded honoris causa (honorary degrees), all doctorates granted by British universities are research doctorates, in that their main (and in many cases only) component is the submission of an extensive and substantial thesis or portfolio of original research, examined by an expert panel appointed by the university. UK doctorates are categorised as:
Doctorates
Subject specialist research – normally PhD/DPhil; the most common form of doctorate
Integrated subject specialist doctorates – integrated PhDs including teaching at master's level
Doctorates by publication – PhD by Published Works; only awarded infrequently
Professional / practice-based / practitioner doctorates – e.g. EdD, ProfDoc/DProf, EngD, etc.; usually include taught elements and have an orientation that combines professional and academic aspects
Higher doctorates
e.g. DD, LLD, DSc, DLitt; higher level than doctorates, usually awarded either for a substantial body of work over an extended period or as honorary degrees.
The Quality Assurance Agency states in the Framework for Higher Education Qualifications of UK Degree-Awarding Bodies (which covers doctorates but not higher doctorates) that:
Doctoral degrees are awarded to students who have demonstrated:
the creation and interpretation of new knowledge, through original research or other advanced scholarship, of a quality to satisfy peer review, extend the forefront of the discipline, and merit publication
a systematic acquisition and understanding of a substantial body of knowledge which is at the forefront of an academic discipline or area of professional practice
the general ability to conceptualise, design and implement a project for the generation of new knowledge, applications or understanding at the forefront of the discipline, and to adjust the project design in the light of unforeseen problems
a detailed understanding of applicable techniques for research and advanced academic enquiry
In the UK, the doctorate is a qualification awarded at FHEQ level 8/level 12 of the FQHEIS on the national qualifications frameworks. The higher doctorates are stated to be "A higher level of award", which is not covered by the qualifications frameworks.
==== Subject specialist doctorates ====
These are the most common doctorates in the UK and are normally awarded as PhDs. While the master/apprentice model was traditionally used for British PhDs, since 2003 courses have become more structured, with students taking courses in research skills and receiving training for professional and personal development. However, the assessment of the PhD remains based on the production of a thesis or equivalent and its defence at a viva voce oral examination, normally held in front of at least two examiners, one internal and one external. Access to PhDs normally requires an upper second class or first class bachelor's degree, or a master's degree. Courses normally last three years, although it is common for students to be initially registered for MPhil degrees and then formally transferred onto the PhD after a year or two. Students who are not considered likely to complete a PhD may be offered the opportunity to complete an MPhil instead.
Integrated doctorates, originally known as 'New Route PhDs', were introduced from 2000 onwards. These integrate teaching at master's level during the first one or two years of the degree, either alongside research or as a preliminary to starting research. These courses usually offer a master's-level exit degree after the taught courses are completed. While passing the taught elements is often required, examination of the final doctorate is still by thesis (or equivalent) alone. The duration of integrated doctorates is a minimum of four years, with three years spent on the research component.
In 2013, Research Councils UK issued a 'Statement of Expectations for Postgraduate Training', which lays out the expectations for training in PhDs funded by the research councils. In the latest version (2016), issued together with Cancer Research UK, the Wellcome Trust and the British Heart Foundation, these include the provision of careers advice, in-depth advanced training in the subject area, provision of transferable skills, training in experimental design and statistics, training in good research conduct, and training for compliance with legal, ethical and professional frameworks. The statement also encourages peer-group development through cohort training and/or Graduate schools.
==== Higher doctorates ====
Higher doctorates are awarded in recognition of a substantial body of original research undertaken over the course of many years. Typically the candidate submits a collection of previously published, peer-refereed work, which is reviewed by a committee of internal and external academics who decide whether the candidate deserves the doctorate. The higher doctorate is similar in some respects to the habilitation in some European countries. However, the purpose of the award is significantly different. While the habilitation formally determines whether an academic is suitably qualified to be a university professor, the higher doctorate does not qualify the holder for a position but rather recognises their contribution to research.
Higher doctorates were defined by the UK Council for Graduate Education (UKCGE) in 2013 as:
an award that is at a level above the PhD (or equivalent professional doctorate in the discipline), and that is typically gained not through a defined programme of study but rather by submission of a substantial body of research-based work.
In terms of the number of institutions offering the awards, the most common doctorates of this type in UKCGE surveys carried out in 2008 and 2013 were the Doctor of Science (DSc), Doctor of Letters (DLitt), Doctor of Law (LLD), Doctor of Music (DMus) and Doctor of Divinity (DD); in the 2008 survey the Doctor of Technology (DTech) tied with the DD. The DSc was offered by all 49 responding institutions in 2008 and 15 out of 16 in 2013, and the DLitt by one fewer in each case, while the DD was offered by 10 responding institutions in 2008 and 3 in 2013. In terms of the number of higher doctorates awarded (not including honorary doctorates), the DSc was the most popular, but the number of awards was very low: the responding institutions had averaged at most one earned higher doctorate per year over the period 2003–2013.
==== Honorary degrees ====
Most British universities award degrees honoris causa to recognise individuals who have made a substantial contribution to a particular field. Usually an appropriate higher doctorate is used in these circumstances, depending on the candidate's achievements. However, some universities differentiate between honorary and substantive doctorates, using the degree of Doctor of the University (D.Univ.) for these purposes, and reserve the higher doctorates for formal academic research.
=== United States ===
U.S. research doctorates are awarded for advanced study followed by successfully completing and defending independent research presented in the form of a dissertation. Professional degrees may use the term "doctor" in their titles, such as the Juris Doctor and Doctor of Medicine, but these degrees rarely contain an independent research component and are not research doctorates. Law school graduates, although awarded the J.D. degree, are not normally addressed as "doctor". In legal studies, the Doctor of Juridical Science is considered equivalent to a Ph.D.
Many American universities offer the PhD after a professional doctorate, or a joint PhD with a professional degree. Often, PhD work is sequential to the professional degree, e.g., a PhD in law after a JD, in physical therapy after a DPT, or in pharmacy after a Pharm.D. Such professional degrees are referred to as entry-level doctorates and the Ph.D. as a post-professional doctorate.
==== Research degrees ====
The most common research doctorate in the United States is the Doctor of Philosophy (Ph.D.). This degree was first awarded in the U.S. at the 1861 Yale University commencement. The University of Pennsylvania followed in 1871, then Cornell University (1872), Harvard (1873), Michigan (1876) and Princeton (1879). Controversy and opposition followed the introduction of the Ph.D. into the U.S. educational system, lasting into the 1950s, as it was seen as an unnecessary artificial transplant from a foreign (German) educational system that corrupted a system based on England's Oxbridge model.
Ph.D.s and other research doctorates in the U.S. typically entail successful completion of coursework, passing a comprehensive examination, and defending a dissertation.
The median number of years for completion of U.S. doctoral degrees is seven. Doctoral applicants were previously required to have a master's degree, but many programs accept students immediately following undergraduate studies. Many programs gauge the potential of applicants to their program and grant a master's degree upon completion of the necessary course work. When so admitted, the student is expected to have mastered the material covered in the master's degree despite not holding one, though this tradition is under heavy criticism. Successfully finishing Ph.D. qualifying exams confers Ph.D. candidate status, allowing dissertation work to begin.
The International Affairs Office of the U.S. Department of Education has listed 18 frequently awarded research doctorate titles identified by the National Science Foundation (NSF) as representing degrees equivalent in research content to the Ph.D.
==== Professional degrees ====
Many fields offer professional doctorates (or professional master's degrees) such as engineering, pharmacy, medicine, etc., that require such degrees for professional practice or licensure. Some of these degrees are also termed "first professional degrees", since they are the first field-specific master's or doctoral degrees.
A Doctor of Engineering (DEng) is a professional degree. In contrast to a PhD in Engineering, where students usually conduct original theory-based research, DEng degrees are built around applied coursework and a practice-led project, and are thus designed for working engineers in industry. DEng students defend their thesis before a thesis committee at the end of their study in order to be conferred the degree.
A Doctor of Pharmacy is awarded as the professional degree in pharmacy, replacing the bachelor's degree. It is the only professional pharmacy degree awarded in the US. Pharmacy programs vary in length from four years for matriculants with a B.S./B.A. to six years for others.
In the twenty-first century, professional doctorates appeared in other fields, such as the Doctor of Audiology in 2007. Advanced Practice Registered Nurses were expected to transition fully to the Doctor of Nursing Practice by 2015, and physical therapists to the Doctor of Physical Therapy by 2020. Professional associations play a central role in this transformation, amid criticism of the lack of proper criteria to ensure appropriate rigor. In many cases, master's-level programs were relabeled as doctoral programs.
== Revocation ==
A doctoral degree can be revoked or rescinded by the university that awarded it. Possible reasons include plagiarism, criminal or unethical activities of the holder, or malfunction or manipulation of academic evaluation processes.
== See also ==
Postdoctoral researcher
Compilation thesis
Habilitation thesis
Doctor (title)
Eurodoctorate
List of fields of doctoral studies
== Notes ==
== References == | Wikipedia/Professional_doctorate |
A number of professional degrees in dentistry are offered by dental schools in various countries around the world.
== Degrees ==
Dental degrees may include:
=== Bachelor's degree ===
Bachelor of Dental Surgery (BDS)
Bachelor's degree of Dentistry (BDS)
Bachelor of Dentistry (BDent)
Bachelor of Dental Science (BDSc)
Bachelor of Science in Dentistry (BScD)
Bachelor of Medicine in Dental Medicine (BM)
Baccalaureus Chirurgiae Dentalis (BChD)
=== Master's degree ===
Master of Science (MS or MSc)
Master of Science in Dentistry (MSD or MScD)
Master of Medical Science (MMSc)
Master of Dentistry (MDent)
Master of Dental Surgery (MDS)
Master of Dental Science (MDentSci)
Master of Stomatology (MS)
Master of Clinical Stomatology (MCS)
Master of Stomatological Medicine (MSM)
=== Doctorate ===
Doctor of Dental Surgery (DDS)
Doctor of Dental Medicine/Doctor of Medicine in Dentistry (DMD)
Doctor of Clinical Dentistry (DClinDent)
Doctor of Dental Science (DDSc)
Doctor of Science in Dentistry (DScD)
Doctor of Medical Science (DMSc)
Doctor of Dentistry (DDent)
Doctor of Philosophy in Dentistry (PhD)
== Certificates and fellowships ==
=== Certificates ===
In some universities, especially in the United States, some postgraduate programs award certificates only.
Diploma in Dentistry (SMF)
Certificate, GPR/AEGD/Orofacial Pain
Certificate, Anesthesiology/Oral & Maxillofacial Pathology/Endodontics/Prosthodontics/Periodontics/Orthodontics/Dental Public Health/Pediatric Dentistry/OMS (American Dental Association – recognized specialty programs)
Certificate, DMTD
=== Commonwealth post-nominals ===
In Commonwealth countries, the Royal Colleges of Dentistry (or the Faculty of Dentistry of a college) award post-nominals upon completion of a series of examinations.
Fellow of the Medical College in Dental Surgery (FMCDS), the National Postgraduate Medical College of Nigeria (NPMCN)
Fellow of the West African College of Surgeons (FWACS), the West African College of Surgeons
Fellow of Dental Surgery of the Royal College of Surgeons (FDSRCS)
Membership in the Faculty of Dental Surgery of the Royal College of Physicians and Surgeons of Glasgow [MFDS RCPS (Glasg)]
Membership in the Faculty of Dental Surgery of the Royal College Surgeons (MFDS RCS)
Fellow of Royal Australasian College of Dental Surgeons (FRACDS)
Membership in the Royal Australasian College of Dental Surgeons (MRACDS)
Membership in Orthodontics, Royal College of Surgeons (MOrth RCS)
Fellow of the Royal College of Dentists of Canada (FRCD(C))
Member of Royal College of Dentists of Canada (MRCD(C))
Fellow of the College of Dental Surgeons of Hong Kong (FCDSHK)
Member of the College of Dental Surgeons of Hong Kong (MCDSHK)
Fellow of the College of Physicians and Surgeons, Bangladesh (FCPS)
Fellow of the College of Physicians and Surgeons, Pakistan (FCPS)
License of Dental Surgery, Royal College of Surgeons (L.D.S. (Eng.))
Fellowship in Sports Dentistry / Fellow of Sports Dentistry (FSD)
In the U.S., most dental specialists attain board certification (Diplomate status) by completing a series of written and oral examinations with the appropriate boards, e.g. Diplomate, American Board of Periodontics.
Each fully qualifies the holder to practice dentistry in at least the jurisdiction in which the degree was presented, assuming local and federal government licensure requirements are met.
== Oceania ==
=== Australia ===
Australia has nine dental schools:
University of Sydney, NSW
Charles Sturt University, NSW*
Griffith University, QLD*
University of Queensland, QLD
James Cook University, QLD*
University of Adelaide, SA
La Trobe University, VIC*
University of Melbourne, VIC
University of Western Australia, WA
(*) indicates newer university dental programs, opened with the aim of increasing the number of rural students entering dentistry and returning to rural practice. The traditional "sandstone" universities have been Sydney, Melbourne, Queensland, Adelaide and Western Australia.
Sydney (as of 2001), Melbourne (as of 2010) and Western Australia (as of 2013) have switched to four-year graduate programs that require a previous bachelor's degree for admission.
Postgraduate training is available in all dental specialties. Degrees awarded used to be Master of Dental Surgery/Science (MDS/MDSc), but lately have changed to Doctorate in Clinical Dentistry (DClinDent).
=== New Zealand ===
New Zealand has only one dental school:
University of Otago, Dunedin
The Faculty of Dentistry awards Bachelor of Dental Surgery (BDS) and Master of Community Dentistry (MComDent) for public health & community dentistry, and Doctorate in Clinical Dentistry (DClinDent) for the other dental specialties.
The body responsible for registering dental practitioners is the Dental Council of New Zealand (DCNZ).
=== Trans Tasman mutual recognition ===
Both Australia and New Zealand recognize each other's educational and professional qualifications and grant professional licenses via reciprocity, similar to the arrangement between the United States and Canada.
=== General Dental Council of the UK ===
The United Kingdom's General Dental Council recognized Australian and New Zealand dental qualifications as registrable degrees until 2000. Graduates applying for dental license registration in the United Kingdom must now sit the Overseas Registration Exam (ORE), a three-part examination.
=== Canadian registration ===
Australia and Canada have a reciprocal accreditation agreement that allows graduates of Canadian or Australian dental schools to register in either country. However, this applies only to graduates from the class of 2011 onward, not to earlier graduates.
=== Royal Australasian College of Dental Surgeons ===
The Royal Australasian College of Dental Surgeons (RACDS) is a postgraduate body that focuses on the postgraduate training of general practitioners and specialist dentists. Additional postgraduate qualifications can be obtained through the college after the candidate has completed the Primary Examination (a basic science examination in anatomy, histology, physiology, biochemistry, pathology and microbiology) and the Final Examination (clinical subjects in dentistry). After successful completion of the examinations and fulfilment of the college requirements, the candidate is awarded the title of Fellow of the Royal Australasian College of Dental Surgeons (FRACDS). For dental specialists, the examination pathway is similar: the Primary Examinations, followed by clinical/oral examinations just before the completion of specialist training, lead to the award of the title Member of the Royal Australasian College of Dental Surgeons in the Special Field Stream (MRACDS (SFS)). For busy general practitioners, the MRACDS in the general stream is also available.
== Bangladesh ==
In Bangladesh, the dental degree is the Bachelor of Dental Surgery (BDS); a diploma in dentistry is also offered. At present, three universities with medical faculties offer dental degrees: the University of Dhaka, the University of Chittagong, and the University of Rajshahi; diplomas are also awarded by the State Medical Faculty. These public universities are affiliated with dental colleges and hospitals, publicly or privately funded, that provide the education for the degree.
At present, postgraduate degrees in specialized dentistry exist in four main clinical specialties:
Orthodontics and Dentofacial Orthopedics
Oral and Maxillofacial Surgery
Conservative Dentistry and Endodontics
Prosthodontics
== Canada ==
There are ten approved dental schools in Canada:
University of Toronto (1868) [D.D.S.]
McGill University (1905) [D.M.D.]
Université de Montréal (1905) [D.M.D]
Dalhousie University (1908) [D.D.S.]
University of Alberta (1923) [D.D.S.]
University of Manitoba (1958) [D.M.D.]
University of British Columbia (1964) [D.M.D.]
University of Western Ontario (1966) [D.D.S.]
University of Saskatchewan (1968) [D.M.D.]
Université Laval (1971) [D.M.D.]
Several universities in Canada offer the DDS degree, including the University of Toronto, the University of Western Ontario, the University of Alberta, and Dalhousie University, while the remaining Canadian dental schools offer the Doctor of Dental Medicine degree to their graduates.
Additional qualifications can be obtained through the Royal College of Dentists of Canada (RCDC), which administers examinations for qualified dental specialists as part of the dentistry profession in Canada. The current examinations are known as the National Dental Specialty Examination (NDSE). Successful completion may lead to Fellowship in the college (FRCD(C)) and may be used for provincial registration purposes.
Canada has a reciprocal accreditation agreement with Australia, Ireland, and the United States, which recognize the dental training of graduates of Canadian dental schools. Obtaining licensure to work in any of the three other countries often requires additional steps, such as successfully completing national board examinations and fulfilling requirements of local governing bodies.
== China ==
China has many universities teaching dental degrees at both undergraduate and postgraduate level. Chinese universities have adapted their programmes from American and European degrees. The undergraduate degree is the Bachelor of Medicine with a major in stomatology or dental surgery, and the postgraduate degree is the Master of Medicine in stomatology (口腔医学硕士). Recently, China introduced a new name for this master's degree, the Master of Stomatological Medicine (MSM), which is offered by top-tier Chinese universities. The programme includes a comprehensive syllabus intended to produce graduates with extensive knowledge of their specialties, skills in clinical practice, and research potential. The other branches of dentistry remain the same as at American universities.
== Finland ==
In Finland, education in dentistry is through a 5.5-year Licentiate of Dental Medicine (DMD or DDS) course, which is entered after high school graduation. Application is by a national combined dental and medical school entry examination. As of 2011, dentistry is taught in the Faculties of Medicine of four universities:
University of Helsinki
University of Turku
University of Oulu
University of Eastern Finland, Kuopio Campus
The first phase of training begins with two years of unified preclinical training for dentists and physicians. Problem-based learning (PBL) is employed, depending on the university. The autumn of the third year consists of a clinico-theoretical phase in pathology, genetics, radiology and public health, and is partially combined with the physicians' second phase. The third, clinical phase of training lasts for the remaining three years and includes periods of being on call at the University Central Hospital Trauma Centre, the Clinic of Oral and Maxillofacial Diseases, and the children's clinic. Candidates who successfully complete the fourth year of training qualify for a paid summer rotation in a community health centre of their choice. The annual national intake of dental students into the Faculties of Medicine is 160.
Doctor of Philosophy (PhD) research is strongly encouraged alongside postgraduate training, which is available in all four universities and lasts an additional 3–6 years. Starting in 2014, the University of Helsinki introduced a new doctoral training system. In this new system, all doctoral candidates belong to a doctoral programme within a doctoral school. FINDOS Helsinki – Doctoral Programme in Oral Sciences – is a programme in the Doctoral School in Health Sciences.
The 11 postgraduate programs are:
Clinical dentistry:
Periodontology
Pedodontology and Preventive Dentistry
Cariology and Endodontology
Prosthodontology and Stomatognathic Physiology
Diagnostic dentistry:
Oral and Maxillofacial Pathology
Oral and Maxillofacial Radiology
Oral and Maxillofacial Medicine
Oral Clinical Microbiology (starts in 2014)
Other:
Orthodontics
Oral and Maxillofacial Surgery
Oral Public Health
== India ==
In India, training in dentistry is through a five-year Bachelor of Dental Surgery (BDS) course, which includes four years of study followed by one year of internship. As of 2019, 310 colleges (40 run by the government and 292 in the private sector) were offering dental education. This amounts to an annual intake of 33,500 graduates.
The three-year, full-time postgraduate Master of Dental Surgery (MDS) is the highest dental degree awarded in India, and its holders are recognized as consultants in one of these specialties:
Prosthodontics (fixed, removable, maxillofacial, and implant prosthodontics)
Periodontics
Oral and maxillofacial surgery
Conservative dentistry and endodontics
Orthodontics and dentofacial orthopaedics
Oral pathology and microbiology
Community dentistry
Pedodontics and preventive dentistry
Oral medicine diagnosis and radiology
Master in Public Health Dentistry
== Israel ==
Israel has two dental schools: the Hebrew University-Hadassah School of Dental Medicine in Jerusalem, founded by the Alpha Omega fraternity, and the Tel Aviv University School of Dental Medicine in Tel Aviv. Both schools have six-year programs and grant the Doctor of Dental Medicine (DMD) degree. In recent decades, students have been eligible for the Bachelor of Medical Sciences (BMedSc) degree after the first three years of training.
== South Africa ==
Related: Medical education in South Africa.
Training in South Africa generally comprises the five-year Bachelor of Dental Surgery, followed by one year's compulsory medical service/internship. The country has five universities with dental faculties.
Until 2003, Stellenbosch University offered the BChD degree. In 2004, the dental faculties of the University of the Western Cape and Stellenbosch University merged and moved to the University of the Western Cape, which is currently the largest dental school in Africa.
Specialisation is through one of the universities as a Master of Dentistry, or through the College of Dentistry within the Colleges of Medicine of South Africa, with certifications offered in oral medicine and periodontics, orthodontics, and prosthodontics. Research degrees are the MSc(Dent) / MDS and PhD(Dent).
== United Kingdom and Ireland ==
Many universities award BDS degrees, including the University of Sheffield, the University of Bristol, Barts and the London School of Medicine and Dentistry, the University of Birmingham, the University of Liverpool, the University of Manchester, the University of Glasgow, the University of Dundee, the University of Aberdeen, King's College London, Cardiff University, Newcastle University, Queen's University Belfast, the University of Central Lancashire, and Peninsula College of Medicine and Dentistry.
In the Republic of Ireland, the University College Cork awards BDS degrees and Trinity College Dublin awards BDentSc degrees.
The University of Leeds awards BChD and MChD (Bachelor/Master of Dental Surgery) degrees.
The Royal College of Surgeons of England, Edinburgh, Glasgow, and Ireland award LDS (Licence/Licentiate in Dental Surgery) degrees.
== Nigeria ==
Many universities award BDS degrees, and a few the BChD (Baccalaureus Chirurgiae Dentalis). In Nigeria, training in dentistry is through a six-year course: typically three years of preclinical training, followed by three years of clinical training after passing Part I examinations in anatomy, biochemistry, and physiology. This is followed by one year of internship (housemanship), after which graduates can go into clinical practice as general dentists. Some go on to specialty training, completing a residency program to become hospital consultants.
As of 2022, 11 dental schools were active, including two with partial accreditation. Fully accredited programs are at the University of Lagos, University of Ibadan, University of Benin, University of Port-Harcourt, University of Nigeria (Enugu), University of Maiduguri, Bayero University (Kano), Lagos State University, and Obafemi Awolowo University (Ile-Ife).
== United States ==
In the United States, at least three years of undergraduate education are required for admission to a dental school; however, most dental schools require at least a bachelor's degree. No particular course of study is required as an undergraduate other than completing the requisite "predental" courses, which generally include one year of general biology, chemistry, organic chemistry, physics, English, and higher-level mathematics such as statistics and calculus. Some dental schools have requirements that go beyond these basics, such as psychology, sociology, biochemistry, anatomy, and physiology. The majority of predental students major in a science, but this is not required, and some students elect to major in a non-science field.
In addition to the core prerequisites, the Dental Admission Test (DAT), a multiple-choice standardized examination, is also required of prospective dental students. The DAT is usually taken during the spring semester of the junior year. The vast majority of dental schools require an interview before admission can be granted. The interview is designed to evaluate the motivation, character, and personality of the applicant.
For the 2009–2010 application cycle, 11,632 applicants applied for admission to dental schools in the United States. Just 4,067 were eventually accepted. The average dental school applicant entering the school year in 2009 had an overall GPA of 3.54 and a science GPA of 3.46. Additionally, their mean DAT Academic Average (AA) was 19.00, while their DAT Perceptual Ability Test (PAT) score was 19.40.
=== Dental education and training ===
Dental school is four academic years in duration and is similar in format to medical school: two years of basic medical and dental sciences, followed by two years of clinical training (with continued didactic coursework). Before graduating, every dental student must successfully complete the National Board Dental Examination Parts I and II (commonly referred to as NBDE I & II). The NBDE Part I is usually taken at the end of the second year, after the majority of the didactic courses have been completed. The NBDE Part I covers gross anatomy, biochemistry, physiology, microbiology, pathology, and dental anatomy and occlusion. The NBDE Part II is usually taken during the winter of the last year of dental school and consists of operative dentistry, pharmacology, endodontics, periodontics, oral surgery, pain control, prosthodontics, orthodontics, pedodontics, oral pathology, and radiology. Since 2012, NBDE Part I scores have been reported as pass/fail.
Since the COVID-19 pandemic, nearly all jurisdictions now utilize the INBDE system.
After graduating, the vast majority of new dentists go directly into practice, while others enter a residency program. Some residency programs train dentists in advanced general dentistry such as General Practice Residencies and Advanced Education in General Dentistry Residencies, commonly referred to as GPR and AEGD. Most GPR and AEGD programs are one year in duration, but several are two years long or provide an optional second year. GPR programs are usually affiliated with a hospital and thus require the dentist to treat a wide variety of patients including trauma, critically ill, and medically compromised patients. Additionally, GPR programs require residents to rotate through various departments within the hospital, such as anesthesia, internal medicine, and emergency medicine, to name a few. AEGD programs are usually in a dental-school setting where the focus is treating complex cases in a comprehensive manner.
==== DDS vs DMD degree ====
In the United States, the Doctor of Dental Surgery and Doctor of Dental Medicine are terminal professional doctorates, which qualify a professional for licensure. The DDS and DMD degrees are considered equivalent. The American Dental Association specifies:
The DDS (Doctor of Dental Surgery) and DMD (Doctor of Dental Medicine) are the same degrees. They are awarded upon graduation from dental school to become a General Dentist. The majority of dental schools award the DDS degree; however, some award a DMD degree. The education and degrees are, in substance, the same.

Harvard University was the first dental school to award the DMD degree. Harvard only grants degrees in Latin, and the Latin translation of Doctor of Dental Surgery, "Chirurgiae Dentium Doctoris", did not share the "DDS" initials of the English term. "The degree 'Scientiae Dentium Doctoris', which would leave the initials of DDS unchanged, was then considered, but was rejected on the ground that dentistry was not a science." (The word order in Latin is not fixed, only the inflections; "Scientiae Dentium Doctoris" is equivalent to "Doctoris Dentium Scientiae".) A Latin scholar was consulted, and it was finally decided that "Medicinae Doctoris" be modified with "Dentariae". This is how the DMD, or "Doctor Medicinae Dentariae", degree originated. (The genitive inflection -is on "Doctoris", instead of the nominative "Doctor", simply reflects that the syntax on the diploma was "the degree of Doctor of Dental Medicine"; both are correct.) The assertion that "dentistry was not a science" reflected the view that dental surgery was an art informed by science, not a science per se, notwithstanding that the scientific component of dentistry is today recognized in the Doctor of Dental Science (DDSc) degree.
Other dental schools made the switch to this notation, and in 1989, 23 of the 66 North American dental schools awarded the DMD. No meaningful difference exists between the DMD and DDS degrees, and all dentists must meet the same national and regional certification standards to practice.
Some other prominent dental schools that award the DMD degree are the University of Florida, Midwestern University-IL, Midwestern University-AZ, Medical University of South Carolina, Augusta University (formerly Medical College of Georgia), University of Connecticut, University of Alabama at Birmingham, University of Louisville, University of Puerto Rico, Rutgers University, Tufts University, Oregon Health & Science University, University of Pennsylvania, Case Western Reserve University, University of Illinois at Chicago, Boston University, Temple University, Western University of Health Sciences, University of Pittsburgh, University of Nevada, Las Vegas, and East Carolina University.
The United States Department of Education and the National Science Foundation do not include the DDS and DMD among the degrees that are equivalent to research doctorates.
=== Licensing examinations ===
To practice, a dentist must pass a licensing examination administered by an individual state or, more commonly, a region. A handful of states maintain independent dental licensing examinations, while the majority accept a regional board examination. The Northeast Regional Board (NERB), Western Regional Board (WREB), Central Regional Dental Testing Service (CRDTS), Southern Regional Testing Agency (SRTA), and Council of Interstate Testing Agencies (CITA) are the five regional testing agencies that administer licensing examinations. Once the examination is passed, the dentist may apply to individual states that accept the regional board examination passed. Each state also requires prospective practitioners to pass an ethics/jurisprudence examination before a license is granted. To maintain a dental license, the doctor must complete Continuing Dental Education (CDE) courses periodically (usually annually), which promotes the continued exploration of knowledge. The amount of CE required varies from state to state but is generally 10–25 CE hours a year.
The completion of a dental degree can be followed by either an entrance into private practice, further postgraduate study and training, or research and academics.
=== Dental specialties in the United States ===
Twelve dental specialties are recognized in the United States. Becoming a specialist requires one to train in a residency or advanced graduate training program. Once residency is completed, the doctor is granted a certificate of specialty training. Many specialty programs have optional or required advanced degrees such as a master's degree: (MS, MSc, MDS, MSD, MDSc, MMSc, MPhil, or MDent), doctoral degree: (DClinDent, DChDent, DMSc, PhD), or medical degree: (MD/MBBS specific to maxillofacial surgery).
Anesthesiology: 3–4 years
Orthodontics: 2–3 years
Endodontics: 2–3 years
Oral and maxillofacial surgery: 4–6 years (additional time for MD/MBBS degree granting programs)
Periodontics: 3 years
Prosthodontics: 2–3 years
Maxillofacial prosthodontics: 1 year (a prosthodontist may elect to sub-specialize in maxillofacial prosthodontics)
Oral and maxillofacial radiology: 3 years
Oral and maxillofacial pathology: 3–5 years
Oral medicine: 2–4 years
Orofacial pain: 1–3 years
Pediatric dentistry: 2–3 years
Dental public health: 3 years
The following are currently recognized as dental specialties in the US under the American Board of Dental Specialties (ABDS):
Oral medicine: 2–4 years
Orofacial pain: 1–3 years
Oral Implantology/Implant Dentistry: requires seven or more years of experience in the practice of implant dentistry and completion of at least 75 implant cases. Applicants must successfully complete both the Part I and Part II examinations within four years.
Dental Board of Anesthesiology: 3–4 years
The following are not currently recognized as dental specialties in the US.
Special needs dentistry: 3 years
Geriatric dentistry: ranges from a weekend course to a 2-year master's course, depending on the certificate-issuing agency.
Cosmetic dentistry: ranges from a weekend course to a 1-year course, depending on the certificate-issuing agency.
Dentists who have completed accredited specialty training programs in these fields are designated registrable (U.S. "Board Eligible") and warrant exclusive titles such as anesthesiologist, orthodontist, oral and maxillofacial surgeon, endodontist, pedodontist, periodontist, or prosthodontist upon satisfying certain registry requirements (U.S.: "Board Certified"; Australia/NZ: "FRACDS"; Canada: "FRCD(C)").
== See also ==
American Student Dental Association
List of dental schools in the United States
== References ==
== External links ==
General Dental Council: UK primary dental qualifications
American Student Dental Association – Licensure by State
A Doctor of Philosophy (PhD, DPhil; Latin: philosophiae doctor or doctor in philosophia) is a terminal degree that usually denotes the highest level of academic achievement in a given discipline and is awarded following a course of graduate study and original research. The name of the degree is most often abbreviated PhD (or, at times, as Ph.D. in North America), pronounced as three separate letters (PEE-aych-DEE). The University of Oxford uses the alternative abbreviation "DPhil".
PhDs are awarded for programs across the whole breadth of academic fields. Since it is an earned research degree, those studying for a PhD are required to produce original research that expands the boundaries of knowledge, normally in the form of a dissertation, and, in some cases, defend their work before a panel of other experts in the field. In many fields, the completion of a PhD is typically required for employment as a university professor, researcher, or scientist.
== Definition ==
In the context of the Doctor of Philosophy and other similarly titled degrees, the term "philosophy" does not refer to the field or academic discipline of philosophy, but is used in a broader sense in accordance with its original Greek meaning, which is "love of wisdom". In most of Europe, all fields (including history, philosophy, social sciences, mathematics, and natural philosophy – later known as natural science) other than theology, law, and medicine (the so-called professional, vocational, or technical curricula) were traditionally known as philosophy, and in Germany and elsewhere in Europe the basic faculty of liberal arts was known as the "faculty of philosophy".
A PhD candidate must submit a project, thesis, or dissertation often consisting of a body of original academic research, which is in principle worthy of publication in a peer-reviewed journal. In many countries, a candidate must defend this work before a panel of expert examiners appointed by the university. Universities sometimes award other types of doctorate besides the PhD, such as the Doctor of Musical Arts (DMA) for music performers, Doctor of Juridical Science (SJD) for legal scholars, and the Doctor of Education (EdD) for studies in education. In 2005 the European University Association defined the "Salzburg Principles", 10 basic principles for third-cycle degrees (doctorates) within the Bologna Process. These were followed in 2016 by the "Florence Principles", seven basic principles for doctorates in the arts laid out by the European League of Institutes of the Arts, which have been endorsed by the European Association of Conservatoires, the International Association of Film and Television Schools, the International Association of Universities and Colleges of Art, Design and Media, and the Society for Artistic Research.
The specific requirements to earn a PhD degree vary considerably according to the country, institution, and time period, from entry-level research degrees to higher doctorates. During the studies that lead to the degree, the student is called a doctoral student or PhD student; a student who has completed any necessary coursework and related examinations and is working on their thesis/dissertation is sometimes known as a doctoral candidate or PhD candidate. A student attaining this level may be granted a Candidate of Philosophy degree at some institutions or may be granted a master's degree en route to the doctoral degree. Sometimes this status is also colloquially known as "ABD", meaning "all but dissertation". PhD graduates may undertake a postdoc in the process of transitioning from study to academic tenure.
Individuals who have earned the Doctor of Philosophy degree use the title Doctor (often abbreviated "Dr" or "Dr."), although the etiquette associated with this usage may be subject to the professional ethics of the particular scholarly field, culture, or society. Those who teach at universities or work in academic, educational, or research fields are usually addressed by this title "professionally and socially in a salutation or conversation". Alternatively, holders may use post-nominal letters such as "Ph.D.", "PhD", or "DPhil", depending on the awarding institution. It is, however, traditionally considered incorrect to use both the title and post-nominals together, although usage in that regard has been evolving over time.
== History ==
=== Medieval and early modern Europe ===
In the universities of Medieval Europe, study was organized in four faculties: the basic faculty of arts, and the three higher faculties of theology, medicine, and laws (canon law and civil law). All of these faculties awarded intermediate degrees (bachelors of arts, theology, laws and medicine) and final degrees. Initially, the titles of master and doctor were used interchangeably for the final degrees—the title Doctor was merely a formality bestowed on a Teacher/Master of the art—but by the late Middle Ages the terms Master of Arts and Doctor of Theology/Divinity, Doctor of Law, and Doctor of Medicine had become standard in most places (though in the German and Italian universities the term Doctor was used for all faculties).
The doctorates in the higher faculties were quite different from the current PhD degree in that they were awarded for advanced scholarship, not original research. No dissertation or original work was required, only lengthy residency requirements and examinations. Besides these degrees, there was the licentiate. Originally this was a license to teach, awarded shortly before the award of the master's or doctoral degree by the diocese in which the university was located, but later it evolved into an academic degree in its own right, in particular in the continental universities.
According to Keith Allan Noble (1994), the first doctoral degree was awarded in medieval Paris around 1150. The doctorate of philosophy developed in Germany as the terminal teacher's credential in the 17th century (circa 1652). There were no PhDs in Germany before the 1650s, when they gradually started replacing the MA as the highest academic degree; arguably, one of the earliest German PhD holders is Erhard Weigel (Dr. phil. hab., Leipzig, 1652).
The full course of studies might, for example, lead in succession to the degrees of Bachelor of Arts, Licentiate of Arts, and Master of Arts, or of Bachelor of Medicine, Licentiate of Medicine, and Doctor of Medicine; but before the early modern era, many exceptions to this existed. Most students left the university without becoming masters of arts, whereas regulars (members of monastic orders) could skip the arts faculty entirely.
=== Educational reforms in Germany ===
This situation changed in the early 19th century through the educational reforms in Germany, most strongly embodied in the model of the University of Berlin, founded in 1810 and controlled by the Prussian government. The arts faculty, which in Germany was labelled the faculty of philosophy, started demanding contributions to research, attested by a dissertation, for the award of their final degree, which was labelled Doctor of Philosophy (abbreviated as Ph.D.)—originally this was just the German equivalent of the Master of Arts degree. Whereas in the Middle Ages the arts faculty had a set curriculum, based upon the trivium and the quadrivium, by the 19th century it had come to house all the courses of study in subjects now commonly referred to as sciences and humanities. Professors across the humanities and sciences focused on their advanced research. Practically all the funding came from the central government, and it could be cut off if the professor was politically unacceptable.
These reforms proved extremely successful, and fairly quickly the German universities started attracting foreign students, notably from the United States. The American students would go to Germany to obtain a PhD after having studied for a bachelor's degree at an American college. So influential was this practice that it was imported to the United States, where in 1861 Yale University started granting the PhD degree to younger students who, after having obtained the bachelor's degree, had completed a prescribed course of graduate study and successfully defended a thesis or dissertation containing original research in science or in the humanities. In Germany, the name of the doctorate was adapted after the philosophy faculty started being split up – e.g. Dr. rer. nat. for doctorates in the faculty of natural sciences – but in most of the English-speaking world the name "Doctor of Philosophy" was retained for research doctorates in all disciplines.
The PhD degree and similar awards spread across Europe in the 19th and early 20th centuries. The degree was introduced in France in 1808, replacing diplomas as the highest academic degree; into Russia in 1819, when the Doktor Nauk degree, roughly equivalent to a PhD, gradually started replacing the specialist diploma, roughly equivalent to the MA, as the highest academic degree; and in Italy in 1927, when PhDs gradually started replacing the Laurea as the highest academic degree.
=== History in the United Kingdom ===
Research degrees first appeared in the UK in the late 19th century in the shape of the Doctor of Science (DSc or ScD) and other such "higher doctorates". The University of London introduced the DSc in 1860, but as an advanced study course, following on directly from the BSc, rather than a research degree. The first higher doctorate in the modern sense was Durham University's DSc, introduced in 1882.
This was soon followed by other universities, including the University of Cambridge establishing its ScD in the same year and the University of London transforming its DSc into a research degree in 1885. These were, however, very advanced degrees, rather than research-training degrees at the PhD level. Harold Jeffreys said that getting a Cambridge ScD was "more or less equivalent to being proposed for the Royal Society."
In 1917, the current PhD degree was introduced, along the lines of the American and German model, and quickly became popular with both British and foreign students. The slightly older degrees of Doctor of Science and Doctor of Literature/Letters still exist at British universities; together with the much older degrees of Doctor of Divinity (DD), Doctor of Music (DMus), Doctor of Civil Law (DCL), and Doctor of Medicine (MD), they form the higher doctorates, but apart from honorary degrees, they are only infrequently awarded.
In English (but not Scottish) universities, the Faculty of Arts had become dominant by the early 19th century. Indeed, the higher faculties had largely atrophied, since medical training had shifted to teaching hospitals, the legal training for the common law system was provided by the Inns of Court (with some minor exceptions, see Doctors' Commons), and few students undertook formal study in theology. This contrasted with the situation in the continental European (and Scottish) universities at the time, where the preparatory role of the Faculty of Philosophy or Arts was to a great extent taken over by secondary education: in modern France, the Baccalauréat is the examination taken at the end of secondary studies. The reforms at the Humboldt University transformed the Faculty of Philosophy or Arts (and its more recent successors such as the Faculty of Sciences) from a lower faculty into one on a par with the Faculties of Law and Medicine.
Similar developments occurred in many other continental European universities, and at least until reforms in the early 21st century, many European countries (e.g., Belgium, Spain, and the Scandinavian countries) had in all faculties triple degree structures of bachelor (or candidate) – licentiate – doctor as opposed to bachelor – master – doctor; the meaning of the different degrees varied from country to country, however. To this day, this is also still the case for the pontifical degrees in theology and canon law; for instance, in sacred theology, the degrees are Bachelor of Sacred Theology (STB), Licentiate of Sacred Theology (STL), and Doctor of Sacred Theology (STD), and in canon law: Bachelor of Canon Law (JCB), Licentiate of Canon Law (JCL), and Doctor of Canon Law (JCD).
=== History in the United States ===
Until the mid-19th century, advanced degrees were not a criterion for professorships at most colleges. That began to change as the more ambitious scholars at major schools went to Germany for one to three years to obtain a PhD in the sciences or humanities. Graduate schools slowly emerged in the United States. In 1852, the first honorary PhD in the nation was given at Bucknell University in Lewisburg, Pennsylvania to Ebenezer Newton Elliott. Nine years later, in 1861, Yale University awarded three PhDs: to Eugene Schuyler in philosophy and psychology, Arthur Williams Wright in physics, and James Morris Whiton Jr. in classics.
Over the following two decades, Harvard University, New York University, Princeton University, and the University of Pennsylvania also began granting the degree. Major shifts toward graduate education were foretold by the opening of Clark University in 1887, which offered only graduate programs, and of Johns Hopkins University, which focused on its PhD program. By the 1890s, Harvard, Columbia, Michigan, and Wisconsin were building major graduate programs, whose alumni were hired by new research universities. By 1900, 300 PhDs were awarded annually, most of them by six universities. It was no longer necessary to study in Germany. However, half of the institutions awarding earned PhDs in 1899 were undergraduate institutions that granted the degree for work done away from campus. Degrees awarded by universities without legitimate PhD programs accounted for about a third of the 382 doctorates recorded by the US Department of Education in 1900, of which another 8–10% were honorary. The awarding of the PhD as an honorary degree was banned by the Board of Regents of the University of the State of New York in 1897. This had a nationwide impact, and after 1907, fewer than 10 honorary PhDs were awarded in the United States each year. The last authenticated honorary PhD (honoris causa) was awarded in 1937, to Bing Crosby by Gonzaga University.
At the start of the 20th century, U.S. universities were held in low regard internationally and many American students were still traveling to Europe for PhDs. The lack of centralised authority meant anyone could start a university and award PhDs. This led to the formation of the Association of American Universities by 14 leading research universities (producing nearly 90% of the approximately 250 legitimate research doctorates awarded in 1900), with one of the main goals being to "raise the opinion entertained abroad of our own Doctor's Degree."
In Germany, the national government funded the universities and the research programs of the leading professors. It was impossible for professors who were not approved by Berlin to train graduate students. In the United States, by contrast, private universities and state universities alike were independent of the federal government. Independence was high, but funding was low. The breakthrough came from private foundations, which began regularly supporting research in science and history; large corporations sometimes supported engineering programs. The postdoctoral fellowship was established by the Rockefeller Foundation in 1919. Meanwhile, the leading universities, in cooperation with the learned societies, set up a network of scholarly journals. "Publish or perish" became the formula for faculty advancement in the research universities. After World War II, state universities across the country expanded greatly in undergraduate enrollment, and eagerly added research programs leading to masters or doctorate degrees. Their graduate faculties had to have a suitable record of publication and research grants. Late in the 20th century, "publish or perish" became increasingly important in colleges and smaller universities.
== Requirements ==
Detailed requirements for the award of a PhD degree vary throughout the world and even from school to school. A student is usually required to hold an honours degree or a master's degree with high academic standing in order to be considered for a PhD program. In the US, Canada, India, Sweden, and Denmark, for example, many universities require coursework in addition to research for PhD degrees. In other countries (such as the UK) there is generally no such condition, though this varies by university and field. Some individual universities or departments specify additional requirements for students not already in possession of a bachelor's degree or equivalent or higher. A successful PhD admission application often requires copies of academic transcripts, letters of recommendation, a research proposal, and a personal statement. Most universities also require an interview before admission.
A candidate must submit a project, thesis, or dissertation often consisting of a body of original academic research, which is in principle worthy of publication in a peer-reviewed context. Moreover, some PhD programs, especially in science, require one to three published articles in peer-reviewed journals. In many countries, a candidate must defend this work before a panel of expert examiners appointed by the university; this defense is open to the public in some countries, and held in private in others; in other countries, the dissertation is examined by a panel of expert examiners who stipulate whether the dissertation is in principle passable and any issues that need to be addressed before the dissertation can be passed.
Some universities in the non-English-speaking world have begun adopting similar standards to those of the anglophone PhD degree for their research doctorates (see the Bologna process).
A PhD student or candidate is conventionally required to study on campus under close supervision. With the popularity of distance education and e-learning technologies, however, some universities now accept students enrolled in a part-time distance-education mode.
In a "sandwich PhD" program, PhD candidates do not spend their entire study period at the same university. Instead, the PhD candidates spend the first and last periods of the program at their home universities and in between conduct research at another institution or field research. Occasionally a "sandwich PhD" will be awarded by two universities.
It is possible to broaden the field of study pursued by a PhD student by the addition of a minor subject of study within a different discipline.
== Value and criticism ==
A career in academia generally requires a PhD, although in some countries it is possible to reach relatively high positions without a doctorate. In North America, professors are increasingly being required to have a PhD, and the percentage of faculty with a PhD may be used as a university ratings measure.
The motivation may also include increased salary, but in many cases this is not the result. Research by Bernard H. Casey of the University of Warwick, UK, suggests that, over all subjects, PhDs provide an earnings premium of 26% over non-accredited graduates, but notes that master's degrees already provide a premium of 23% and bachelor's degrees 14%. While this is a small return to the individual (or even an overall deficit when tuition and lost earnings during training are accounted for), he claims there are significant benefits to society from the extra research training.
However, some research suggests that overqualified workers are often less satisfied and less productive at their jobs. These difficulties are increasingly being felt by graduates of professional degrees, such as law school, looking to find employment. PhD students may need to take on debt to undertake their degree.
A PhD is also required in some positions outside academia, such as research jobs in major international agencies. In some cases, the executive directors of some types of foundations may be expected to hold a PhD. A PhD is sometimes felt to be a necessary qualification in certain areas of employment, such as in foreign policy think-tanks: U.S. News & World Report wrote in 2013 that "[i]f having a master's degree at the minimum is de rigueur in Washington's foreign policy world, it is no wonder many are starting to feel that the PhD is a necessary escalation, another case of costly signaling to potential employers". Similarly, an article on the Australian public service states that "credentialism in the public service is seeing a dramatic increase in the number of graduate positions going to PhDs and masters degrees becoming the base entry level qualification".
The Economist published an article in 2010 citing various criticisms of the state of PhDs. These included a prediction by economist Richard B. Freeman that, based on pre-2000 data, only 20% of life science PhD students would gain a faculty job in the U.S., and that in Canada 80% of postdoctoral research fellows earned no more than an average construction worker ($38,600 a year). According to the article, only the fastest-developing countries (e.g. China or Brazil) have a shortage of PhDs. In 2022, Nature reported that PhD students' wages in biological sciences in the US do not cover living costs.
The U.S. higher education system often offers little incentive to move students through PhD programs quickly and may even provide incentive to slow them down. To counter this problem, the United States introduced the Doctor of Arts degree in 1970 with seed money from the Carnegie Foundation for the Advancement of Teaching. The aim of the Doctor of Arts degree was to shorten the time needed to complete the degree by focusing on pedagogy over research, although the Doctor of Arts still contains a significant research component. Germany is one of the few nations engaging these issues; it has done so by reconceptualising PhD programs as training for careers outside academia, but still in high-level positions. This development can be seen in the large number of PhD holders, typically from the fields of law, engineering, and economics, in the very top corporate and administrative positions. To a lesser extent, the UK research councils have tackled the issue by introducing, since 1992, the EngD.
Mark C. Taylor opined in 2011 in Nature that total reform of PhD programs in almost every field is necessary in the U.S. and that pressure to make the necessary changes will need to come from many sources (students, administrators, public and private sectors, etc.). Other articles in Nature have also examined the issue of PhD reform.
Freeman Dyson, professor emeritus at the Institute for Advanced Study in Princeton, was opposed to the PhD system and did not hold a PhD degree. It was nevertheless understood by his peers that he was a world-leading scientist, with many accomplishments already achieved during his graduate study years, and that he could have been granted the degree at any moment.
== Degrees around the globe ==
UNESCO, in its International Standard Classification of Education (ISCED), states that: "Programmes to be classified at ISCED level 8 are referred to in many ways around the world such as PhD, DPhil, D.Lit, D.Sc, LL.D, Doctorate or similar terms. However, it is important to note that programmes with a similar name to 'doctor' should only be included in ISCED level 8 if they satisfy the criteria described in Paragraph 263. For international comparability purposes, the term 'doctoral or equivalent' is used to label ISCED level 8."
=== National variations ===
In German-speaking nations, most Eastern European nations, successor states of the former Soviet Union, most parts of Africa, Asia, and many Spanish-speaking countries, the corresponding degree to a Doctor of Philosophy is simply called "Doctor" (Doktor), and the subject area is distinguished by a Latin suffix (e.g., "Dr. med." for Doctor medicinae, Doctor of Medicine; "Dr. rer. nat." for Doctor rerum naturalium, Doctor of the Natural Sciences; "Dr. phil." for Doctor philosophiae, Doctor of Philosophy; "Dr. iur." for Doctor iuris, Doctor of Laws).
=== Argentina ===
==== Admission ====
In Argentina, admission to a PhD program at a public Argentine university requires the full completion of a master's degree or a licentiate degree. Non-Argentine master's titles are generally accepted into a PhD program when the degree comes from a recognized university.
==== Funding ====
While a significant portion of postgraduate students finance their tuition and living costs with teaching or research work at private and state-run institutions, international institutions, such as the Fulbright Program and the Organization of American States (OAS), have been known to grant full scholarships for tuition with apportions for housing.
Others apply for funds to CONICET, the national public body of scientific and technical research, which typically awards more than a thousand scholarships each year for this purpose, thus guaranteeing many PhD candidates remain within the system.
==== Requirements for completion ====
Upon completion of at least two years' research and coursework as a graduate student, a candidate must demonstrate original contributions to their specific field of knowledge within a frame of academic excellence. The doctoral candidate's work should be presented in a dissertation or thesis prepared under the supervision of a tutor or director and reviewed by a doctoral committee. This committee should be composed of examiners external to the program, at least one of whom should also be external to the institution. The academic degree of Doctor in the corresponding field of science is conferred after a successful defense of the candidate's dissertation.
=== Australia ===
==== Admission ====
Admission to a PhD program in Australia requires applicants to demonstrate capacity to undertake research in the proposed field of study. The standard requirement is a bachelor honours degree with either first-class or upper second-class honours. Research master's degrees and coursework master's degrees with a 25% research component are usually considered equivalent. It is also possible for research master's degree students to "upgrade" to PhD candidature after demonstrating sufficient progress.
==== Scholarships ====
PhD students are sometimes offered a scholarship to study for their PhD degree. The most common of these was the government-funded Australian Postgraduate Award (APA), until its discontinuation in 2017. It was replaced by the Research Training Program (RTP), awarded to students of "exceptional research potential", which provides a living stipend of approximately A$34,000 a year (tax-free). RTPs are paid for a duration of three years, and a 6-month extension is usually possible upon citing delays outside the student's control. Some universities also fund a similar scholarship that matches the APA amount. Due to a continual increase in living costs, many PhD students are forced to live below the poverty line. In addition to the more common RTP and university scholarships, Australian students can draw on other sources of scholarship funding from industry, private enterprise, and organisations.
==== Fees ====
Australian citizens, permanent residents, and New Zealand citizens are not charged course fees for their PhD or research master's degree, with the exception at some universities of the student services and amenities fee (SSAF), which is set by each university and is typically the maximum amount allowed by the Australian government. All fees other than the SSAF are paid for by the Australian government under the Research Training Program. International students and coursework master's degree students must pay course fees unless they receive a scholarship to cover them.
==== Requirements for completion ====
Completion requirements vary. Most Australian PhD programs do not have a required coursework component; the credit points attached to the degree are earned entirely through the research itself, which usually takes the form of an 80,000-word thesis that makes a significant new contribution to the field. Recent pressure on higher degree by research (HDR) students to publish has resulted in increasing interest in the PhD by publication, as opposed to the more traditional PhD by dissertation; the former typically requires a minimum of two publications but also retains traditional thesis elements such as an introductory exegesis and linking chapters between papers. The PhD thesis is sent to external examiners who are experts in the field of research and who have not been involved in the work. Examiners are nominated by the candidate's university, and their identities are often not revealed to the candidate until the examination is complete. A formal oral defence is generally not part of the examination of the thesis, largely because of the distances that overseas examiners would need to travel; however, since 2016 there has been a trend toward implementing oral defences at many Australian universities. At the University of South Australia, PhD candidates who started after January 2016 undertake an oral defence via an online conference with two examiners.
=== Canada ===
==== Admission ====
Admission to a doctoral programme at a university in Canada typically requires completion of a Master's degree in a related field, with sufficiently high grades and proven research ability. In some cases, a student may progress directly from an Honours Bachelor's degree to a PhD program; other programs allow a student to fast-track to a doctoral program after one year of outstanding work in a Master's program (without having to complete the Master's).
An application package typically includes a research proposal, letters of reference, transcripts, and in some cases, a writing sample or Graduate Record Examinations scores. A common requirement for prospective PhD students is the comprehensive or qualifying examination, a process that often commences in the second year of a graduate program. Generally, successful completion of the qualifying exam permits continuance in the graduate program. Formats for this examination include oral examination by the student's faculty committee (or a separate qualifying committee), written tests designed to demonstrate the student's knowledge in a specialized area (see below), or both.
At English-speaking universities, a student may also be required to demonstrate English language abilities, usually by achieving an acceptable score on a standard examination (for example the Test of English as a Foreign Language). Depending on the field, the student may also be required to demonstrate ability in one or more additional languages. A prospective student applying to French-speaking universities may also have to demonstrate some English language ability.
==== Funding ====
While some students work outside the university (or at student jobs within the university), in some programs students are advised (or must agree) not to devote more than ten hours per week to activities (e.g., employment) outside of their studies, particularly if they have been given funding. For large and prestigious scholarships, such as those from NSERC and Fonds québécois de la recherche sur la nature et les technologies, this is an absolute requirement.
At some Canadian universities, most PhD students receive an award equivalent to part or all of the tuition amount for the first four years (this is sometimes called a tuition deferral or tuition waiver). Other sources of funding include teaching assistantships and research assistantships; experience as a teaching assistant is encouraged but not requisite in many programs. Some programs may require all PhD candidates to teach, which may be done under the supervision of their supervisor or regular faculty. Besides these sources of funding, there are also various competitive scholarships, bursaries, and awards available, such as those offered by the federal government via NSERC, CIHR, or SSHRC.
==== Requirements for completion ====
In general, the first two years of study are devoted to completion of coursework and the comprehensive examinations. At this stage, the student is known as a "PhD student" or "doctoral student." It is usually expected that the student will have completed most of their required coursework by the end of this stage. Furthermore, it is usually required that by the end of eighteen to thirty-six months after the first registration, the student will have successfully completed the comprehensive exams.
Upon successful completion of the comprehensive exams, the student becomes known as a "PhD candidate." From this stage on, the bulk of the student's time will be devoted to their own research, culminating in the completion of a PhD thesis or dissertation. The final requirement is an oral defense of the thesis, which is open to the public in some, but not all, universities. At most Canadian universities, the time needed to complete a PhD degree typically ranges from four to six years. It is, however, not uncommon for students to be unable to complete all the requirements within six years, particularly given that funding packages often support students for only two to four years; many departments will allow program extensions at the discretion of the thesis supervisor or department chair. Alternative arrangements exist whereby a student is allowed to let their registration in the program lapse at the end of six years and re-register once the thesis is completed in draft form. The general rule is that graduate students are obligated to pay tuition until the initial thesis submission has been received by the thesis office. In other words, if a PhD student defers or delays the initial submission of their thesis they remain obligated to pay fees until such time that the thesis has been received in good standing.
=== China ===
In China, applicants may enter a doctoral program directly after obtaining a bachelor's degree, or after obtaining a master's degree. Those who enter a doctoral program directly after a bachelor's degree usually need four to five years to obtain a doctorate and are not awarded a master's degree during that period.
The courses at the doctoral level are mainly completed in the first and second years, and the remaining years are spent doing experiments/research and writing papers. At most universities, the maximum duration of doctoral study is 7 years. If a doctoral student does not complete their degree within 7 years, it is likely that they can only obtain a study certificate without any degree.
China has thirteen statutory types of academic degrees, which also apply to doctorate degrees. Despite the naming difference, all these thirteen types of doctoral degrees are research and academic degrees that are equivalent to PhD degrees. These thirteen doctorates are:
Doctor of Philosophy (for the discipline of philosophy)
Doctor of Economics
Doctor of Law
Doctor of Education
Doctor of Literature
Doctor of History
Doctor of Science
Doctor of Engineering
Doctor of Agriculture
Doctor of Medicine (equivalent to a PhD in medical sciences)
Doctor of Military Science
Doctor of Management
Doctor of Fine Arts.
In international academic communication, Chinese doctoral degree recipients sometimes translate their doctorate degree names to PhD in Discipline (such as PhD in Engineering, Computer Science) to facilitate peer understanding.
=== Colombia ===
==== Admission ====
In Colombia, admission to a PhD program may require a master's degree (Magíster) at some universities, especially public universities. In specific cases, however, a candidate may apply for a direct doctorate, subject to the jury's recommendations on the thesis proposal.
==== Funding ====
Most postgraduate students in Colombia must finance their tuition fees through teaching assistantships or research work. Some institutions, such as Colciencias, Colfuturo, CeiBA, and Icetex, grant scholarships or provide awards in the form of forgivable loans.
==== Requirements for completion ====
After two to two and a half years, the doctoral candidate is expected to submit their research work for an oral qualification, at which suggestions and corrections about the research hypothesis and methodology, as well as about the course of the research work, are made. The PhD degree is received only after a successful defense of the candidate's thesis (four or five years after enrollment), which in most cases also requires that the most important results have been published in at least one peer-reviewed, high-impact international journal.
=== Finland ===
In Finland, the degree of filosofian tohtori (abbreviated FT) is awarded by traditional universities, such as University of Helsinki. A Master's degree is required, and the doctorate combines approximately 4–5 years of research (amounting to 3–5 scientific articles, some of which must be first-author) and 60 ECTS points of studies. Other universities such as Aalto University award degrees such as tekniikan tohtori (TkT, engineering), taiteen tohtori (TaT, art), etc., which are translated in English to Doctor of Science (D.Sc.), and they are formally equivalent. The licentiate (filosofian lisensiaatti or FL) requires only 2–3 years of research and is sometimes done before an FT.
=== France ===
==== History ====
Before 1984 three research doctorates existed in France: the State doctorate (doctorat d'État, the old doctorate introduced in 1808), the third cycle doctorate (doctorat de troisième cycle, created in 1954 and shorter than the State doctorate) and the diploma of doctor-engineer (diplôme de docteur-ingénieur created in 1923), for technical research. After 1984, only one type of doctoral degree remained, called "doctorate" (Doctorat). The latter is equivalent to the PhD.
==== Admission ====
Students pursuing the PhD degree must first complete a master's degree program, which takes two years after graduation with a bachelor's degree (five years in total). The candidate must apply for a doctoral research project and be supervised throughout the doctoral program by a doctoral advisor (Directeur de thèse or directeur doctoral) who holds a habilitation.
The PhD admission is granted by a graduate school (in French, "école doctorale"). A PhD candidate may follow some in-service training offered by the graduate school while continuing their research in a laboratory. Their research may be carried out in a laboratory, at a university or in a company. In the first case, the candidates can be hired by the university or a research organisation. In the latter case, the company hires the candidate and they are supervised by both the company's tutor and a laboratory's professor. Completion of the PhD degree generally requires 3 years after the master's degree but it can last longer in specific cases.
==== Funding ====
The financing of PhD research comes mainly from funds for research of the French Ministry of Higher Education and Research. The most common procedure is a short-term employment contract called a doctoral contract: the institution of higher education is the employer and the PhD candidate the employee. However, the candidate can apply for funds from a company, which can host them at its premises (as in the case where PhD candidates do their research at a company). In another possible situation, the company and the institute can sign a funding agreement together so that the candidate still has a public doctoral contract but works at the company on a daily basis (this is particularly the case for the (French) Scientific Cooperation Foundation). Many other resources come from regional/city projects, associations, etc.
=== Germany ===
==== Admission ====
In Germany, admission to a doctoral program is generally on the basis of having an advanced degree (i.e., a master's degree, diplom, magister, or staatsexamen), mostly in a related field and having above-average grades. A candidate must also find a tenured professor from a university to serve as the formal advisor and supervisor (Betreuer) of the dissertation throughout the doctoral program. This supervisor is informally referred to as Doktorvater or Doktormutter, which literally translate to "doctor's father" and "doctor's mother" respectively. The formal admission is the beginning of the so-called Promotionsverfahren, while the final granting of the degree is called Promotion.
The duration of the doctorate depends on the field. A doctorate in medicine may take less than a full-time year to complete; those in other fields, two to six years. Most doctorates are awarded with specific Latin designations for the field of research (except for engineering, where the designation is German), instead of a general name for all fields (such as the Ph.D.). The most important degrees are:
Dr. rer. nat. (rerum naturalium; natural and formal sciences, i.e. physics, chemistry, biology, mathematics, computer science and information technology, or psychology);
Dr. phil. (philosophiae; humanities such as philosophy, philology, history, and social sciences such as sociology, political science, or psychology as well);
Dr. iur. (iuris; law);
Dr. oec. (oeconomiae; economics);
Dr. rer. pol. (rerum politicarum; economics, business administration, political science);
Dr. theol. (theologiae; theology);
Dr. med. (medicinae; medicine);
Dr. med. dent. (medicinae dentariae; dentistry);
Dr. med. vet. (medicinae veterinariae; veterinary medicine);
Dr. rer. med. (rerum medicarum; medical science; a researcher, not a physician);
Dr.-Ing. (engineering).
Over fifty such designations exist, many of them rare or no longer in use. As a title, the degree is commonly written in front of the name in abbreviated form, e.g., Dr. rer. nat. Max Mustermann or Dr. Max Mustermann, dropping the designation entirely. However, leaving out the designation is only allowed when the doctorate degree is not an honorary doctorate, which must be indicated by Dr. h.c. (from Latin honoris causa).
While most German doctorates are considered equivalent to the PhD, an exception is the medical doctorate, where "doctoral" dissertations are often written alongside undergraduate study. The European Research Council decided in 2010 that those doctorates do not meet the international standards of a PhD research degree. There are different forms of university-level institution in Germany, but only professors from "Universities" (Univ.-Prof.) can serve as doctoral supervisors – "Universities of Applied Sciences" (Fachhochschulen) are not entitled to award doctorates, although some exceptions apply to this rule.
==== Structure ====
Depending on the university, doctoral students (Doktoranden) can be required to attend formal classes or lectures, some of them also including exams or other scientific assignments, in order to obtain one or more certificates of qualification (Qualifikationsnachweise). Depending on the doctoral regulations (Promotionsordnung) of the university, and sometimes on the status of the doctoral student, such certificates may not be required. Usually, former students, research assistants or lecturers from the same university may be spared from attending extra classes. Instead, under the tutelage of a single professor or advisory committee, they are expected to conduct independent research. In addition to doctoral studies, many doctoral candidates work as teaching assistants, research assistants, or lecturers.
Many universities have established research-intensive Graduiertenkollegs ("graduate colleges"), which are graduate schools that provide funding for doctoral studies.
==== Duration ====
The typical duration of a doctoral program can depend heavily on the subject and area of research. Usually, three to five years of full-time research work are required. The average time to graduation is 4.5 years.
In 2014, the median age of new PhD graduates was 30.4 years.
=== India ===
In India, a master's degree is usually required to gain admission to a doctoral program. Direct admission to a PhD program after graduating with a BTech may also be granted by the IITs, the IIITs, the NITs, and the Academy of Scientific and Innovative Research. In some subjects, completing a Master of Philosophy (MPhil) is a prerequisite to obtaining funding/fellowship for a PhD.
According to new rules prescribed by the UGC, universities must conduct Research Eligibility Tests in research aptitude and the selected subject. After clearing these tests, the shortlisted candidates are required to appear for an interview with the available PhD supervisor and give a presentation of their research proposal (plan of work or synopsis). During study, candidates must submit progress reports and, after successful completion of the coursework, are required to give a pre-submission presentation and finally defend their thesis in an open viva voce. In India, qualifying in the National Eligibility Test (NET for LS and JRF), conducted by the National Testing Agency (NTA), is mandatory to apply for a professorship, lectureship, or Junior Research Fellowship.
=== Italy ===
==== History ====
The Dottorato di ricerca (research doctorate), abbreviated to "Dott. Ric." or "PhD", is an academic title awarded at the end of a course of not less than three years, admission to which is based on entrance examinations and academic rankings in the Bachelor of Arts ("Laurea", a three-year diploma) and Master of Arts ("Laurea Magistrale" a two-year diploma). While the standard PhD follows the Bologna process, the MD–PhD programme may be completed in two years.
The first institution in Italy to create a doctoral program (PhD) was Scuola Normale Superiore di Pisa in 1927 under the historic name "Diploma di Perfezionamento".
Further, the research doctorates or PhD (Dottorato di ricerca) in Italy were introduced by law and Presidential Decree in 1980, referring to the reform of academic teaching, training and experimentation in organisation and teaching methods.
The Superior Graduate Schools in Italy (Scuola Superiore Universitaria), also called Schools of Excellence (Scuole di Eccellenza) such as Scuola Normale Superiore di Pisa and Sant'Anna School of Advanced Studies still keep their reputed historical "Diploma di Perfezionamento" PhD title by law and MIUR Decree.
==== Admission ====
Doctorate courses are open, without age or citizenship limits, to all those who already hold a "laurea magistrale" (master's degree) or a similar academic title awarded abroad that has been recognised as equivalent to an Italian degree by the committee responsible for the entrance examinations.
The number of places on offer each year and details of the entrance examinations are set out in the examination announcement.
=== Poland ===
In Poland, a doctoral degree (Pol. doktor, abbreviated dr) is an advanced academic degree awarded by universities in most fields and by the Polish Academy of Sciences, regulated by acts of the Polish parliament and government orders, in particular by the Ministry of Science and Higher Education of the Republic of Poland. Students with a master's degree or equivalent are admitted via a doctoral entrance exam. The title of PhD is awarded to a scientist who has completed a minimum of three years of PhD studies (Pol. studia doktoranckie; not required to obtain a PhD), finished a theoretical or laboratory scientific work, passed all PhD examinations, submitted a dissertation presenting the author's research and findings, and successfully defended the doctoral thesis. Typically, upon completion, the candidate undergoes an oral examination, always public, by a supervisory committee with expertise in the given discipline.
=== Scandinavia ===
The doctorate was introduced in Sweden in 1477 and in Denmark–Norway in 1479 and awarded in theology, law, and medicine, while the magister's degree was the highest degree at the Faculty of Philosophy, equivalent to the doctorate.
Scandinavian countries were among the early adopters of a degree known as a doctorate of philosophy, based upon the German model. Denmark and Norway both introduced the Dr. Phil(os). degree in 1824, replacing the Magister's degree as the highest degree, while Uppsala University of Sweden renamed its Magister's degree Filosofie Doktor (fil. dr) in 1863. These degrees, however, became comparable to the German Habilitation rather than the doctorate, as Scandinavian countries did not have a separate Habilitation.
The degrees were uncommon and not a prerequisite for employment as a professor; rather, they were seen as distinctions similar to the British (higher) doctorates (DLitt, DSc). Denmark introduced an American-style PhD, the ph.d., in 1989; it formally replaced the Licentiate's degree and is considered a lower degree than the dr. phil. degree; officially, the ph.d. is not considered a doctorate, but unofficially it is referred to as "the smaller doctorate", as opposed to the dr. phil., "the grand doctorate". Holders of a ph.d. degree are not entitled to style themselves as "Dr." Currently, Denmark distinguishes between the dr. phil. as the proper doctorate and a higher degree than the ph.d., whereas in Norway the historically analogous dr. philos. degree is officially regarded as equivalent to the new ph.d. Today, the Norwegian PhD degree is awarded to candidates who have completed a supervised doctoral programme at an institution, while candidates with a master's degree who have conducted research on their own may submit their work for a Dr. Philos. defence at a relevant institution. PhD candidates must complete one trial lecture before they can defend their thesis, whereas Dr. Philos. candidates must complete two trial lectures.
In Sweden, the doctorate of philosophy was introduced at Uppsala University's Faculty of Philosophy in 1863. In Sweden, the Latin term is officially translated into Swedish filosofie doktor and commonly abbreviated fil. dr or FD. The degree represents the traditional Faculty of Philosophy and encompasses subjects from biology, physics, and chemistry, to languages, history, and social sciences, being the highest degree in these disciplines. Sweden currently has two research-level degrees, the Licentiate's degree, which is comparable to the Danish degree formerly known as the Licentiate's degree and now as the ph.d., and the higher doctorate of philosophy, Filosofie Doktor. Some universities in Sweden also use the term teknologie doktor for doctorates awarded by institutes of technology (for doctorates in engineering or natural science related subjects such as materials science, molecular biology, computer science etc.). The Swedish term fil. dr is often also used as a translation of corresponding degrees from e.g. Denmark and Norway.
=== Singapore ===
Singapore has six universities offering doctoral study opportunities: National University of Singapore, Nanyang Technological University, Singapore Management University, Singapore Institute of Technology, Singapore University of Technology and Design, and Singapore University of Social Sciences.
=== South Africa ===
The first doctoral degree in South Africa was issued in 1899 by the University of the Cape of Good Hope (now the University of South Africa, or UNISA), and the first PhDs were conferred in the 1920s by the University of Cape Town and the University of the Witwatersrand. Owing to the influence of British colonialism, South African higher education bears profound similarities to the modern UK university system. South Africa has twenty-six state universities, all of which offer doctoral degrees. Presently, only two private institutions offer accredited PhDs: the South African Theological Seminary and St. Augustine College of South Africa. Typically, South African colleges and universities abbreviate Doctor of Philosophy as either PhD or DPhil.
==== Admission ====
South African PhD programs require both a four-year undergraduate degree and a relevant graduate degree. Certain PhD programs require preexisting knowledge of research languages or field experience. Some programs require applicants to undergo an interview or provide references, a curriculum vitae, and letters of recommendation. Typically, PhD applicants must furnish a provisional research proposal that discloses the basic trajectory of their area of interest. English competency is a universal requirement.
==== Structure and duration ====
Akin to PhD programs in the UK and in the Netherlands, South African PhD programs consist of a research thesis or dissertation produced under the supervision of a subject-matter expert. South African PhD programs are designed to result in a substantial piece of scholarship that has undergone critical evaluation through peer review. Unlike PhD programs in many other African countries or the US, South African PhD programs rarely involve coursework and are undertaken through rigorous and semi-independent research. Most South African PhD programs are designed to be completed within three to six years.
=== Spain ===
In Spain, doctoral degrees have been regulated since the 2014/2015 academic year by Real Decreto (Royal Decree) 99/2011. They are granted by a university on behalf of the King, and the diploma has the force of a public document. The Ministry of Science keeps a national registry of theses called TESEO.
All doctoral programs are of a research nature. The studies should include original results and can take a maximum of three years, although this period can be extended under certain circumstances to five years.
The student must write a thesis presenting a new discovery or an original contribution to science. If approved by his or her thesis director (or directors), the study is presented to a panel of 3–5 distinguished scholars. Any doctor attending the public presentation is allowed to challenge the candidate with questions on their research. If approved, the candidate receives the doctorate. Four marks can be granted: Unsatisfactory, Pass, Satisfactory, and Excellent. The "cum laude" (with all honours, in Latin) denomination can be added to an Excellent mark if all five members of the tribunal agree.
The social standing of doctors in Spain was evidenced by the fact that Philip III allowed PhD holders to take a seat and cover their heads during an act at the University of Salamanca in which the King took part, so as to recognise their merits. The right to cover one's head in the presence of the King is traditionally reserved in Spain to Grandees and Dukes. The concession is remembered in solemn ceremonies held by the university, at which doctors are told to take a seat and cover their heads as a reminder of that royal leave.
All Doctor Degree holders are reciprocally recognized as equivalent in Germany and Spain ("Bonn Agreement of November 14, 1994").
=== Ukraine ===
In Ukraine, since 2016, the Doctor of Philosophy (PhD, Ukrainian: Доктор філософії) has been the highest education level and the first science degree. The PhD is awarded in recognition of a substantial contribution to scientific knowledge and the origination of new directions and visions in science. A PhD degree is a prerequisite for heading a university department in Ukraine. Upon completion of a PhD, the holder can elect to continue their studies and obtain a post-doctoral degree called Doctor of Sciences (DSc, Ukrainian: Доктор наук), which is the second and highest science degree in Ukraine.
=== United Kingdom ===
==== Admission ====
In the United Kingdom, universities admit applicants to PhD programs on a case-by-case basis; depending on the university, admission is typically conditional on the prospective student having completed an undergraduate degree with at least upper second-class honours or a postgraduate master's degree but requirements can vary even within institutions. For example, the University of Edinburgh requires a minimum of a 2:1 honours degree (or international equivalent) for a PhD in clinical psychology, while its business school requires a master's degree with an average of 65% in the taught components and a distinction-level dissertation.
For students who are not from English-speaking countries, UK Visas and Immigration requires universities to assess English proficiency. Many do this using IELTS tests, although the requirements may vary depending on the institution. 143 UK universities require applicants to undergo IELTS before admission, with minimum acceptable scores ranging from 4 to 6.5 and above. However, some universities are willing to accept students without IELTS.
Students are first accepted onto an MPhil or MRes programme and may transfer to PhD regulations upon satisfactory progress; this is sometimes referred to as APG (Advanced Postgraduate) status. The transfer is typically done after one or two years, and the research work done may count towards the PhD degree. If a student fails to make satisfactory progress, they may be offered the opportunity to write up and submit for an MPhil degree, as at King's College London and the University of Manchester. In many universities, the MPhil is also offered as a stand-alone research degree.
PhD students from outside the EU/EEA or other exempt countries are required to comply with the Academic Technology Approval Scheme (ATAS), which involves undergoing a security clearance process with the Foreign Office for courses in sensitive areas where research could be used for weapons development. This requirement was introduced in 2007 due to concerns about overseas terrorism and weapons proliferation.
==== Funding ====
In the United Kingdom, funding for PhD students is sometimes provided by government-funded Research Councils (UK Research and Innovation – UKRI) or the European Social Fund, usually in the form of a tax-free bursary which consists of tuition fees together with a stipend. Tuition fees are charged at different rates for "Home/EU" and "Overseas" students, generally £3,000–£6,000 per year for the former and £9,000–£14,500 for the latter (which includes EU citizens who have not been normally resident in the EEA for the last three years), although this can rise to over £16,000 at elite institutions. Higher fees are often charged for laboratory-based degrees. As of 2022/23, the national indicative fee for PhD students is £4,596, increasing annually, typically with inflation. There is no regulation of the fees charged by institutions, but if they charge a higher fee they may not require Research Council funded students to make up the difference themselves.
As of 2022/23, the national minimum stipend for UKRI-funded students is £16,062 per year, increasing annually, typically with inflation. The period of funding for a PhD project is between three and four years, depending on the research council and the decisions of individual institutions, with extensions in funding of up to twelve months available to offset periods of absence for maternity leave, shared parental leave, adoption leave, absences covered by a medical certificate, and extended jury service. PhD work beyond this may be unfunded or funded from other sources. A very small number of scientific studentships are paid at a higher rate – for example, in London, Cancer Research UK, the ICR and the Wellcome Trust stipend rates start at around £19,000 and progress annually to around £23,000 a year, an amount that is free of tax and national insurance. Research Council funding is distributed to Doctoral Training Partnerships and Centres for Doctoral Training, which are responsible for student selection within the eligibility guidelines established by the Research Councils. The ESRC (Economic and Social Research Council), for example, explicitly states that a minimum of a 2:1 (or a master's degree) is required.
Many students who are not in receipt of external funding may choose to undertake the degree part-time, thus reducing the tuition fees. The annual tuition fee for a part-time PhD is typically 50–60% of the equivalent full-time rate. However, since the duration of a part-time PhD is longer than that of a full-time degree, the overall cost may be the same or higher. The part-time option leaves free time in which to earn money for subsistence. Students may also take part in tutoring, work as research assistants, or (occasionally) deliver lectures, typically at a rate of £12–14 per hour, either to supplement existing low income or as a sole means of funding.
==== Completion ====
There is usually a preliminary assessment to remain in the program, and the thesis is submitted at the end of a three- to four-year program. These periods are usually extended pro rata for part-time students. With special dispensation, the final date for the thesis can be extended for up to four additional years, for a total of seven, but this is rare. For full-time PhDs, a four-year time limit has now been fixed, and students must apply for an extension to submit a thesis past this point. Since the early 1990s, British funding councils have adopted a policy of penalising departments where large proportions of students fail to submit their theses within four years of achieving PhD-student status (or the pro rata equivalent) by reducing the number of funded places in subsequent years. Inadvertently, this puts significant pressure on candidates to minimise the scope of their projects with a view to thesis submission, regardless of quality, and discourages time spent on activities that would otherwise further the impact of the research on the community (e.g., publications in high-impact journals, seminars, workshops). Furthermore, supervising staff are encouraged in their career progression to ensure that the PhD students under their supervision finalise their projects in three years rather than the four that the program is permitted to cover. These issues contribute to an overall discrepancy between supervisors and PhD candidates in the priority they assign to the quality and impact of the research contained in a PhD project, the former favouring quick PhD projects across several students and the latter favouring a larger scope for their own ambitious project, training, and impact.
There has recently been an increase in the number of Integrated PhD programs available, such as at the University of Southampton. These courses include a Master of Research (MRes) in the first year, which consists of a taught component as well as laboratory rotation projects. The PhD must then be completed within the next three years. As this includes the MRes, all deadlines and timeframes are brought forward to encourage completion of both the MRes and the PhD within four years of commencement. These programs are designed to provide students with a greater range of skills than a standard PhD; for the university, they are a means of gaining an extra year's fees from public sources.
==== Other doctorates ====
Some UK universities (e.g. Oxford) abbreviate their Doctor of Philosophy degree as "DPhil", while most use the abbreviation "PhD"; but these are stylistic conventions, and the degrees are in all other respects equivalent.
In the United Kingdom, PhD degrees are distinct from other doctorates, most notably the higher doctorates such as DLitt (Doctor of Letters) or DSc (Doctor of Science), which may be granted on the recommendation of a committee of examiners on the basis of a substantial portfolio of submitted (and usually published) research. However, some UK universities still maintain the option of submitting a thesis for the award of a higher doctorate.
Recent years have seen the introduction of professional doctorates, which are at the same level as PhDs but more specific in their field. Most tend not to be solely academic, but instead combine academic research with a taught component or a professional qualification. They are found most notably in the fields of engineering (EngD), educational psychology (DEdPsych), occupational psychology (DOccPsych), clinical psychology (DClinPsych), health psychology (DHealthPsy), social work (DSW), nursing (DNP), public administration (DPA), business administration (DBA), and music (DMA). A more generic degree also used is DProf or ProfD. These typically have a more formal taught component consisting of smaller research projects, as well as a 40,000–60,000-word thesis component, which together are officially considered equivalent to a PhD degree.
=== United States ===
In the United States, the PhD degree is the highest academic degree awarded by universities in most fields of study. There are more than 282 universities in the United States that award the PhD degree, and those universities vary widely in their criteria for admission, as well as the rigor of their academic programs.
==== Requirements ====
Typically, PhD programs require applicants to have a bachelor's degree in a relevant field, and, in many cases in the humanities, a master's degree, reasonably high grades, several letters of recommendation, relevant academic coursework, a cogent statement of interest in the field of study, and satisfactory performance on a graduate-level exam specified by the respective program (e.g., GRE, GMAT).
==== Duration, age structure, statistics ====
Depending on the specific field of study, completion of a PhD program usually takes four to eight years of study after the bachelor's degree; those students who begin a PhD program with a master's degree may complete their PhD degree a year or two sooner. As PhD programs typically lack the formal structure of undergraduate education, there are significant individual differences in the time taken to complete the degree. Overall, 57% of students who begin a PhD program in the US will complete their degree within ten years, approximately 30% will drop out or be dismissed, and the remaining 13% of students will continue on past ten years.
The median age of PhD recipients in the US is 32 years. While many candidates are awarded their degree in their 20s, 6% of PhD recipients in the US are older than 45 years.
The number of PhD diplomas awarded by US universities has risen nearly every year since 1957, according to data compiled by the US National Science Foundation. In 1957, US universities awarded 8,611 PhD diplomas; 20,403 in 1967; 31,716 in 1977; 32,365 in 1987; 42,538 in 1997; 48,133 in 2007, and 55,006 in 2015.
==== Funding ====
PhD students at US universities typically receive a tuition waiver and some form of annual stipend. Many US PhD students work as teaching assistants or research assistants. Graduate schools increasingly encourage their students to seek outside funding; many are supported by fellowships they obtain for themselves or by their advisers' research grants from government agencies such as the National Science Foundation and the National Institutes of Health. Many Ivy League and other well-endowed universities provide funding for the entire duration of the degree program (if it is short) or for most of it, especially in the forms of tuition waivers/stipends.
=== USSR, Russian Federation and former Soviet Republics ===
==== Candidate of Science degree awarded by the State Higher Attestation Commission ====
In Russia, the degree of Candidate of Sciences (Russian: кандидат наук, Kandidat Nauk) was the first advanced research qualification in the former USSR (it was introduced there in 1934) and some Eastern Bloc countries (Czechoslovakia, Hungary) and is still awarded in some post-Soviet states (Russian Federation, Belarus, and others). According to the "Guidelines for the recognition of Russian qualifications in the other European countries," in countries with a two-tier system of doctoral degrees (like the Russian Federation, some post-Soviet states, Germany, Poland, Austria and Switzerland), the degree of Candidate of Sciences should be considered for recognition at the level of the first doctoral degree, and in countries with only one doctoral degree, it should be considered for recognition as equivalent to the PhD degree.
Since most education systems only have one advanced research qualification granting doctoral degrees or equivalent qualifications (ISCED 2011, par.270), the degree of Candidate of Sciences (Kandidat Nauk) of the former USSR countries is usually considered to be at the same level as the doctorate or PhD degrees of those countries.
According to the Joint Statement by the Permanent Conference of the Ministers for Education and Cultural Affairs of the Länder of the Federal Republic of Germany (Kultusministerkonferenz, KMK), German Rectors' Conference (HRK) and the Ministry of General and Professional Education of the Russian Federation, the degree of Candidate of Sciences is recognised in Germany at the level of the German degree of Doktor and the degree of Doktor Nauk at the level of German Habilitation. The Russian degree of Candidate of Sciences is also officially recognised by the Government of the French Republic as equivalent to French doctorate.
According to the International Standard Classification of Education, for purposes of international educational statistics, Candidate of Sciences belongs to ISCED level 8, or "doctoral or equivalent", together with PhD, DPhil, DLitt, DSc, LLD, Doctorate, or similar. It is mentioned in the Russian version of ISCED 2011 (par.262) on the UNESCO website as an equivalent to PhD belonging to this level. In the same way as PhD degrees awarded in many English-speaking countries, Candidate of Sciences allows its holders to reach the level of docent. The second doctorate (or post-doctoral degree) in some post-Soviet states, called Doctor of Sciences (Russian: доктор наук, Doktor Nauk), is given as an example of second advanced research qualifications or higher doctorates in ISCED 2011 (par.270) and is similar to the Habilitation in Germany, Poland and several other countries. It constitutes a higher qualification than the PhD under the European Qualifications Framework (EQF) and the Dublin Descriptors.
About 88% of Russian students studying at state universities study at the expense of budget funds. The average stipend in Russia (as of August 2011) is $430 a year ($35/month). The average tuition fee in graduate school is $2,000 per year.
==== PhD degree awarded by university ====
On 19 June 2013, for the first time in the Russian Federation, defenses were held for the PhD degree awarded by universities, instead of the Candidate of Sciences degree awarded by the State Higher Attestation Commission.
Renat Yuldashev, a graduate of the Department of Applied Cybernetics of the Faculty of Mathematics and Mechanics of St. Petersburg State University, was the first to defend a thesis in the field of mathematics according to the new rules for the SPbSU PhD degree.
The defense procedure in the field of mathematics drew on the experience of the joint Finnish-Russian research and educational program organized in 2007 by the Faculty of Information Technology of the University of Jyväskylä and the Faculty of Mathematics and Mechanics of St. Petersburg State University: the program's co-chairs — N. Kuznetsov, G. Leonov and P. Neittaanmäki — were the organizers of the first defenses and co-supervisors of the dissertations.
== Models of supervision ==
At some universities, there may be training for those wishing to supervise PhD studies. There is much literature available, such as Delamont, Atkinson, and Parry (1997). Dinham and Scott (2001) have argued that the worldwide growth in research students has been matched by the increase in the number of what they term "how-to" texts for both students and supervisors, citing examples such as Pugh and Phillips (1987). These authors report empirical data on the benefits to a PhD candidate from publishing; students are more likely to publish with adequate encouragement from their supervisors.
Wisker (2005) has reported that research into this field distinguishes two models of supervision:
The technical-rationality model of supervision, emphasising technique; and the negotiated order model, which is less mechanistic, emphasising fluid and dynamic change in the PhD process. These two models were first distinguished by Acker, Hill and Black (1994; cited in Wisker, 2005). Considerable literature exists on the expectations that supervisors may have of their students (Phillips & Pugh, 1987) and the expectations that students may have of their supervisors (Phillips & Pugh, 1987; Wilkinson, 2005) in the course of PhD supervision. Similar expectations are implied by the Quality Assurance Agency's Code for Supervision (Quality Assurance Agency, 1999; cited in Wilkinson, 2005).
== PhD in the workforce ==
PhD graduates represent a relatively small, elite group within most countries — around 1.1% of adults across OECD countries. Slovenia, Switzerland and Luxembourg have higher numbers of PhD graduates per capita. For Slovenia, this is because MSc degrees awarded before the Bologna Process are ranked at the same level of education as the PhD. Excluding these MSc degrees, Slovenia has 1.4% PhD graduates, which is comparable to the average in OECD and EU-23 countries.
== International PhD equivalent degrees ==
== See also ==
History of higher education in the United States
List of fields of doctoral studies in the United States
Doctor of Professional Studies
Piled Higher and Deeper, a comic strip
Terminal degree
Doctor of Philosophy by publication
== References ==
== Further reading ==
The Chemistry Quality Eurolabels or European Quality Labels in Chemistry (Labels européens de Qualité en Chimie) is a marketing scheme for chemistry degrees at institutions located within the 45 countries involved in the Bologna process. Labels are awarded to qualifying institutions under the names Eurobachelor and Euromaster, as well as the proposed Eurodoctorate. The Label Committee not only prepares proposals for the ECTN Administrative Council to award the Eurolabels, but also judges the quality of chemical education programmes at higher education institutions (HEIs). ECTN and its Label Committee collaborate closely with EuCheMS and the American Chemical Society.
It is a framework which is supported by EuCheMS, and the labels are awarded by ECTN. The project is supported by the European Commission (EC) through its SOCRATES programme. The purpose of the framework is to "promote recognition of first, second cycle degrees, and third cycle degrees not only within the 45 countries involved in the Bologna process".
== History ==
The European Union promoted the Bologna process and the creation of a single European higher education area, both of which require mobility of graduates across Europe.
ECTN (European Chemistry Thematic Network) worked in the EU project "Tuning Educational Structures in Europe" and developed Eurobachelor, a framework for a first cycle qualification (first degree) in chemistry. EuCheMS approved Eurobachelor in October 2003.
In June 2004 the Bologna process seminar "Chemistry Studies in the European Higher Education Area" approved Eurobachelor.
== Label committees ==
The label committee members are as follows:
=== 2016–2018 ===
Reiner Salzer (Chair), TU Dresden, Dresden, Germany
Martino Di Serio (Vice-Chair), University of Naples Federico II, Naples, Italy
Jiří Barek (Secretary for Internal Matters), Charles University, Prague, Czech Republic
Gergely Tóth (Secretary for External Matters), Eötvös Loránd University, Budapest, Hungary
=== 2015–2016 ===
Reiner Salzer (Chair), TU Dresden, Dresden, Germany
Martino Di Serio (Vice-Chair), University of Naples Federico II, Naples, Italy
Ray Wallace (Secretary), Nottingham Trent University, Nottingham, UK
a number of members
=== 2014–2015 ===
Reiner Salzer (Chair), TU Dresden, Dresden, Germany
Pavel Drašar (Past-Chair), University of Chemistry and Technology, Prague, Czech Republic
Ray Wallace (Secretary), Nottingham Trent University, Nottingham, UK
a number of members
=== 2013–2014 ===
Reiner Salzer (Chair), TU Dresden, Dresden, Germany
Pavel Drašar (Past-Chair), University of Chemistry and Technology, Prague, Czech Republic
Evangelia Varella (Secretary), University of Thessaloniki, Thessaloniki, Greece
a number of members
=== 2008–2013 ===
Pavel Drašar (Chair), University of Chemistry and Technology, Prague, Czech Republic
Reiner Salzer (Vice-Chair), TU Dresden, Dresden, Germany
Richard Whewell (Secretary 2008), Strathclyde University, Glasgow, Scotland, UK
Evangelia Varella (Secretary 2008–2013), University of Thessaloniki, Thessaloniki, Greece
a number of members
=== 2006–2008 ===
Raffaella Pagani (Chair)
Pavel Drašar (Vice-Chair), University of Chemistry and Technology, Prague, Czech Republic
Terry Mitchell (Secretary)
a number of members
=== 2004–2006 ===
Terry Mitchell (Chair)
Raffaella Pagani (Vice-Chair)
David Barr (Secretary), Royal Society of Chemistry, UK
a number of members
== Eurobachelor ==
Eurobachelor is a registered trademark and an initiative adopted by the EuCheMS General Assembly in 2003. It is associated with the Chemistry Quality Eurolabels. As of 8 April 2013, 60 Eurobachelor quality labels have been awarded. The label is intended for first cycle qualifications (bachelor's degrees).
Eurobachelor is based on 180 ECTS (European credits), which is comparable to the three-year British degrees, but it does not include the British concepts of honours degrees and ordinary degrees.
== Euromaster ==
Euromaster is a registered trademark and an initiative adopted by the EuCheMS General Assembly in 2005. It is associated with the Chemistry Quality Eurolabels. As of 8 April 2013, 36 Euromaster quality labels have been awarded. The label is intended for master's degrees.
Euromaster, introduced after Eurobachelor, is intended for second cycle qualifications (postgraduate degrees).
== Eurodoctorate ==
Eurodoctorate is associated with the Chemistry Quality Eurolabels. As of 8 April 2013, 1 Eurodoctorate quality label was awarded. The label is intended for third cycle qualifications (i.e. doctoral degrees).
The Tuning Chemistry Subject Area Group (Tuning SAG) met with a working party of ECTN (European Chemistry Thematic Network Association) in February 2006 in Helsinki, Finland, taking into account the declarations of the 2005 Bergen Communiqué. The EHEA Overarching Framework, approved by the Ministers of Education of European Union member states in Bergen, uses the Dublin descriptors, and the Tuning SAG decided to use the Dublin descriptors as the basis for a new set of descriptors for third cycle qualifications, the Budapest descriptors.
The Chemistry Eurodoctorate Framework version 1 was published in November 2006.
== Awarded labels ==
As of 8 April 2013, 60 Eurobachelor, 36 Euromaster, and 1 Eurodoctorate labels have been awarded to 52 institutions and 3 consortia from 20 countries.
The countries that have been awarded labels include:
Austria
Belgium
Czech Republic
Estonia
Finland
France
Germany
Greece
Hungary
Ireland
Italy
Kazakhstan
Morocco
Netherlands
Poland
Portugal
Slovakia
Slovenia
Spain
United Kingdom
== See also ==
Bologna process
European higher education area
European Chemistry Thematic Network Association
EChemTest
Tuning Educational Structures in Europe (European Union project)
EuCheMS (European Association for Chemical and Molecular Sciences)
Tuning Chemistry Subject Area Group
Dublin descriptor
European Quality Labels
== References ==
== External links ==
Official website
The Engineering Doctorate (EngD, previously Professional Doctorate in Engineering or PDEng) is a Dutch degree awarded to graduates of a Technological Designer (engineering) program that develops its students' capabilities to work within a professional context. These programs focus on applied techniques and design in their respective engineering fields. The technological EngD designer programs were initiated at the request of the Dutch high-tech industry. High-tech companies need professionals who can design and develop complex new products and processes and offer innovative solutions. All programs work closely with the high-tech industry, offering trainees the opportunity to participate in large-scale, interdisciplinary design projects. Through this cooperation, EngD programs provide trainees with a valuable network of contacts in industry. Each program covers a different technological field, for example managing complex architectural construction projects, designing mechanisms for user interfaces for consumer products, or developing high-tech software systems for software-intensive systems. Participation in a program that awards the abbreviation EngD requires either a master's degree in a related field or an accredited B.Sc. degree (at least three years and 180 ECTS) in computer science (or a strongly related scientific or engineering discipline) combined with at least five years of relevant academic work experience.
PDEng degrees can be obtained at four technical Universities in the Netherlands, Delft University of Technology, Eindhoven University of Technology, University of Twente, and Wageningen University & Research. Between these universities interscholastic cooperation programs exist like the 4TU Federation and its Stan Ackermans Institute.
The title PDEng is regarded as equivalent to the Engineering Doctorate (EngD), and as of 1 September 2022, the PDEng title in the Netherlands has been renamed to EngD.
== Accreditation ==
The two- or three-year, full-time post-master's programs all lead to a Professional Doctorate in Engineering (PDEng), a doctoral-level degree that the university confers upon the candidate at the end of the program. The programs are certified by the Dutch Certification Committee for Technological Design Programs (CCTO or Dutch: Nederlandse Certificatiecommissie voor Opleidingen tot Technologisch Ontwerper), which represents the interests of the organization of Netherlands Industry Entrepreneurs and Employers (VNO-NCW/MKB Netherlands) and the Royal Netherlands Society of Engineers (Dutch: Koninklijk Instituut Van Ingenieurs - KIVI). The CCTO's main goal is to ensure that such degrees hold to the high standards established by both academia and industry. The committee reviews these degree programs every five years to ensure continued standards compliance.
Although the PDEng and PhD are both recognised as postgraduate degrees, they are not the same. A PDEng is practically oriented and does not require a body of original academic research to graduate.
== History ==
The Professional Doctorate in Engineering (PDEng) name owes its existence to the adoption of the bachelor/master (Ba/Ma) degree system after the Bologna Process. While most Dutch universities adopted the Ba/Ma system in 2001, the PDEng was known as the official Master of Technical Design (MTD) degree until 2004. From that date, the Executive Boards of the 3TU federation (TU/e, TU Delft and the University of Twente) jointly decided to use the PDEng title. Later, in 2022, the Executive Boards of 4TU (3TU plus Wageningen University & Research) decided to change the degree from Professional Doctorate in Engineering (PDEng) to Engineering Doctorate (EngD) for graduates of the Technological Design or PDEng programs as of 1 September 2022.
== Admission ==
Application is open to university graduates from the Netherlands and other countries. A trainee must hold at least a Master of Science degree or equivalent, preferably in the exact sciences. In addition, applicants must have an interest in designing solutions for complex technological problems. There may be an assessment and selection procedure before entering a program. The PDEng programs use strict selection criteria to ensure the required high quality: excellent marks, motivation and a design-oriented attitude are vitally important, and a trainee should also have an excellent command of the English language. The exact admission and selection procedures differ from program to program. All applications are judged by the Selection Committee.
== EngD programs at various Universities ==
Source:
EngD programs at the Delft University of Technology
Civil and Environmental Engineering (BPE)
Bioprocess Engineering (BPE)
Chemical Product Design (CPD)
Process and Equipment Design (PED)
EngD programs at the Eindhoven University of Technology
Automotive Systems Design (ASD)
Clinical Informatics (CI)
Data Science (DS)
Design of Electrical Engineering Systems - Track Healthcare Systems Design (DEES - HSD)
Design of Electrical Engineering Systems - Track Information and Communication Technology (DEES - ICT)
Industrial Engineering (IE)
Mechatronic Systems Design (MSD)
Process and Product Design (PPD)
Qualified Medical Engineer (QME)
Smart Energy Buildings and Cities (SEB&C)
Software Technology (ST)
Human System Interaction (HSI)
EngD programs at the University of Twente
Business & IT (BIT)
Civil Engineering (CE)
Energy & Process Technology (EPT)
Maintenance (MT)
Robotics (R)
EngD programs at the Wageningen University & Research
Design for AgriFood and Ecological Systems
EngD programs at the University of Groningen
Autonomous System
Sustainable Process Design
== References ==
Habilitation is the highest university degree, or the procedure by which it is achieved, in Germany, France, Italy, Poland and some other European and non-English-speaking countries. The candidate fulfills a university's set criteria of excellence in research, teaching, and further education, which usually includes a dissertation.
The degree, sometimes abbreviated Dr. habil. (Doctor habilitatus), dr hab. (doktor habilitowany), or D.Sc. (Doctor of Sciences in Russia and some CIS countries), is often a qualification for full professorship in those countries. In German-speaking countries it allows the degree holder to bear the title PD (for Privatdozent). In a number of countries there exists an academic post of docent, appointment to which often requires such a qualification. The degree conferral is usually accompanied by a public oral defence event (a lecture or a colloquium) with one or more opponents.
Habilitation is usually awarded 5–15 years after a PhD degree or its equivalent. Achieving this academic degree does not automatically give the scientist a paid position, though many people who apply for the degree already have steady university employment.
== History and etymology ==
The term habilitation is derived from the Medieval Latin habilitare, meaning "to make suitable, to fit", from Classical Latin habilis "fit, proper, skillful". The degree developed in the Holy Roman Empire of the German Nation in the seventeenth century (c. 1652). Initially, habilitation was synonymous with "doctoral qualification". The term became synonymous with "post-doctoral qualification" in Germany in the 19th century "when holding a doctorate seemed no longer sufficient to guarantee a proficient transfer of knowledge to the next generation". Afterwards, it became normal in the German university system to write two doctoral theses: the inaugural thesis (Inauguraldissertation), completing a course of study, and the habilitation thesis (Habilitationsschrift), which opens the road to a professorship.
== Prevalence ==
Habilitation qualifications exist or existed in:
Algeria (Habilitation à diriger des recherches, "accreditation to supervise research", abbreviated HDR)
Armenia, Azerbaijan (Habil. dr.; currently abolished and no longer conferred, but those who have earned the degree earlier will use it for life)
Austria (formerly Univ.-Doz., now Priv.-Doz.)
Belarus (Доктар навук, Łacinka: Doktar navuk)
Belgium (French-speaking part: Agrégation de l'enseignement supérieur, until 2010)
Brazil (Livre-docência)
Bulgaria (Доцент, Docent)
Czech Republic (doc., docent)
Denmark (dr. med./scient./phil.)
Egypt (العالمية Ālimiyya/Al-Azhar)
Finland (Dosentti/Docent)
France (Habilitation à diriger des recherches, "accreditation to supervise research", abbreviated HDR)
Germany (Priv.-Doz. and/or Dr. habil.)
Greece (υφηγεσία, υφηγητής), abolished in 1983
Hungary (Dr. habil.)
Italy (Abilitazione scientifica nazionale, since 2012)
Latvia (Dr. habil.), since 1995 no longer conferred, but those who have earned the degree earlier will use it for life
Lithuania (Dr. habil.), since 2003 no longer conferred
Luxembourg (autorisation à diriger des recherches, "authorization to supervise research", or ADR)
Moldova
Poland (dr hab., doktor habilitowany)
Portugal (Agregação)
Romania (abilitare)
Russia, Kyrgyzstan, Kazakhstan, Uzbekistan, Ukraine (Доктор наук, Doktor nauk, "Doctor of Sciences")
Serbia (Доцент, Docent)
Slovakia (Docent)
Slovenia (Docent)
Spain (Accreditation of research – Agregado)
Sweden (Docent)
Switzerland (PD and/or Dr. habil.)
Tunisia (Habilitation à diriger des recherches, "accreditation to supervise research", abbreviated HDR)
== Process ==
A habilitation thesis can be either cumulative (based on previous research, be it articles or monographs) or monographical, i.e., a specific, unpublished thesis, which then tends to be very long. While cumulative habilitations are predominant in some fields (such as medicine), they have been, since about a century ago, almost unheard of in others (such as law).
The level of scholarship of a habilitation is considerably higher than for a doctoral dissertation in the same academic tradition in terms of quality and quantity, and must be accomplished independently, without direction or guidance of a faculty supervisor. In the sciences, publication of numerous (sometimes ten or more) research articles is required during the habilitation period of about four to ten years. In the humanities, a major book publication may be a prerequisite for defense.
It is possible to obtain a professorship without habilitation if the search committee attests that the candidate has qualifications equal to those of a habilitation and the higher-ranking bodies (the university's senate and the country's ministry of education) approve. However, while some subjects make liberal use of this (e.g., the natural sciences, in order to employ candidates from countries with different systems, and the arts, to employ active artists), in other subjects it is rarely done.
The habilitation is awarded after a public lecture, to be held after the thesis has been accepted, and after which the venia legendi (Latin: 'permission to read', i.e., to lecture) is bestowed. In some areas, such as law, philosophy, theology and sociology, the venia, and thus the habilitation, is given only for certain sub-fields (such as criminal law, civil law, or philosophy of science, practical philosophy, etc.); in others, for the entire field.
Although disciplines and countries vary in the typical number of years for obtaining habilitation after getting a doctorate, it usually takes longer than the American academic tenure track. For example, in Poland until 2018, the statutory time for getting a habilitation (traditionally, although not obligatorily, relying on a book publication) was eight years. Theoretically, if an assistant professor did not succeed in obtaining habilitation in this time, they were to be moved to a position of lecturer, with a much higher teaching load and no research obligations, or even be dismissed. In practice, however, schools often extended the deadlines for habilitation for scholars who did not make it in time, provided there was evidence that they would be able to finish in the near future.
=== Austria ===
In Austria, the procedure is currently regulated by national law (Austrian University Act UG2002 §103). In addition to a sub-commission of the senate (which includes student representatives for a hearing on the candidate's teaching abilities), the graduation process involves an external reviewer. Holding a habilitation allows academics to conduct research and to supervise students (PhD, MSc, etc.) on behalf of the university. As it is an academic degree, this remains valid even if the person is not enrolled (or is no longer enrolled) at that institution (Habilitation ad personam). Appointment to a full professorship through an international finding commission includes a venia docendi (UG2002 §98(12)), which is restricted to the time of the appointment (UG2002 §98(13) – Habilitation ad positionem).
While the habilitation ensures the rights of independent research and supervision, it is up to the statutes of the individual universities to grant those rights also to, e.g., associate professors without habilitation. Currently the major Austrian universities do so only for master's-level students, not for PhD programs.
=== Brazil ===
Livre-docência is a title (similar to Habilitation in Germany) granted to holders of doctorate degrees upon submission of a cumulative thesis followed by a viva voce examination. It has practically disappeared amongst Brazilian Federal HEIs. It is still required at a few institutions for admissions as a full professor (professor titular), most notably in the three state universities of the state of São Paulo, as well as at the Federal University of São Paulo (UNIFESP).
=== France ===
The degree of Docteur d'État (State Doctor) or Doctorat d'État (State Doctorate), called Doctorat ès lettres (Doctor of Letters) or Doctorat ès sciences (Doctor of Sciences) before the 1950s, formerly awarded by universities in France had a somewhat similar purpose. In 1984, the Doctorat d'État was replaced by the Habilitation à diriger des recherches.
The award of the French habilitation is a general requirement for being the main supervisor of PhD students and for being eligible for full professor positions. The official eligibility, known as qualification, is granted by the French Conseil National des Universités (CNU). Members of the Directeur de Recherche corps, who are assimilated to full professors by the CNU, do not require the French habilitation to supervise PhD students. Depending on the field, the French habilitation requires consistent research over five to ten years after appointment as an associate professor (maître de conférences), a substantial amount of significant publications, the supervision of at least one PhD student from start to graduation, and/or a successful track record securing extramural funding as a principal investigator, as well as a sound, ambitious, and feasible five-year research project. Outstanding postdoctoral researchers who are not yet appointed to a university may also obtain the habilitation if they meet the requirements. The French habilitation committee is constituted by a majority of external and sometimes foreign referees. The French habilitation entitles associate professors (maîtres de conférences) to apply for full professor positions (professeur des universités).
=== Germany ===
In order to hold the rank of a full professor (W3) within the German university system, it is necessary to have obtained the habilitation (or "habilitation-equivalent achievements"). This can be demonstrated by leading a research group, being a junior professor, or other achievements in research and teaching as a post-doctoral researcher. The habilitation in Germany is usually earned after several years of independent research and teaching, either "internally" while working at a university or "externally" as a research and teaching-oriented practitioner. Once the habilitation thesis (Habilitationsschrift, often simply Habilitation) and all other requirements are completed, the candidate (called Habilitand/in in German) "has habilitated him- or herself" and receives an extension to his/her doctoral degree, namely Dr. habil. (with the specification, such as Dr. rer. nat. habil.). The habilitation is thus an additional qualification at a higher level than the German doctoral degree. Only those candidates receiving the highest (or second-highest) grade for their doctoral thesis are encouraged to proceed with a habilitation.
A typical procedure after completing the habilitation is that the successful researcher officially receives the so-called Venia legendi (Latin for "permission for lecturing") for a specific academic subject at universities (sometimes also referred to as Venia docendi, Latin for "right of teaching"). Someone in possession of the Venia legendi but not a professorship has the right to carry the title Privatdozent (for men) or Privatdozentin (for women), abbreviated PD or Priv.-Doz. The status as a PD requires doing some (generally unpaid) teaching in order to keep the title (Titellehre or titular teaching).
=== Italy ===
In the Italian legal system, habilitations are different types of acts and authorizations.
==== Habilitations for associate and full professorships in universities ====
Regarding university habilitations, the so-called Gelmini reform of the research and university teaching system (Italian Law 240 of 2010 and subsequent modifications) established the national scientific habilitation (abilitazione scientifica nazionale, or ASN) for appointment to the roles of associate professor and full professor. This means that, as a prerequisite for being selected by a university committee to fill these roles, it is necessary to have obtained the scientific qualification for the relative kind of teaching.
For STEM fields (so-called "bibliometric fields"), the qualification requires a two-step evaluation:
first, a quantitative assessment: each candidate for an ASN as associate or full professor must meet at least two of these three requirements: having published more papers than most associate or full professors in Italian universities, having received more citations than most associate or full professors in Italian universities, and having an h-index higher than most associate or full professors in Italian universities;
then, a specific committee (one for each scientific sub-field) will qualitatively evaluate the scientific CV of the candidates, considering funding, mobility, autonomy of the research, awards won, and so on.
The successful candidate will then receive his or her ASN habilitation as associate or full professor (or, in some instances, for both) and may thus apply for those vacancies in Italian universities.
The ASN habilitation also allows candidates to compete for three-year tenure-track assistant professorship positions (once called RTDb in the Italian system, for ricercatore a tempo determinato di tipo b, and now labeled RTT, ricercatore in tenure-track). At the end of the three-year contract the assistant professor must hold a valid ASN habilitation in order to become a permanent associate professor; otherwise, he or she is permanently laid off.
To prevent this (which may be disastrous for already undermanned Italian departments), it is common practice to award RTDb positions to people already habilitated as associate or full professors, which in practice runs counter to the spirit of the Gelmini reform.
If an ASN habilitation application fails, the candidate can apply again, but only after a 12-month hiatus.
The ASN habilitation was initially valid for four years only, but this validity term has been extended many times: first to six years, then to nine, then to eleven (2023). Currently (2025), due to the extreme scarcity of tenure-track positions in Italy, the validity has been increased to twelve years by yet another government decree, in order not to let the first awarded habilitations expire (with consequent protests and possible lawsuits).
==== Professional qualifications ====
In the field of free regulated professions protected by a professional body (architects, lawyers, engineers, doctors, pharmacists, journalists, etc.), the term identifies the state examination, more properly called the "state examination for the qualification for the exercise of professions", which allows graduates or those with the necessary titles to register on the list of professionals and work. Many state exams include the possession of a specific qualification among their requirements. For example, to take the habilitation exam to become an architect or an engineer, one must have already graduated from a university. However, in order to actually practice the profession it is necessary to register with the relevant professional association and, if the profession is exercised independently, to have a VAT number. These exams are usually organized by the professional orders with the collaboration of area universities.
In other cases, especially for health professions or childcare professionals not organized under a professional body, the degree itself is a qualifying title.
Finally, for some habilitations, since the corresponding activities cannot be carried out autonomously, the holder must be hired by a suitable institution in order to effectively practice the profession in question. This is, for example, the case of the education sector: once the qualifying examination has been passed, a public competition must be won for recruitment in an upper or lower secondary school.
=== Portugal ===
In the Portuguese legal system, there are two types of habilitation degree. The first is normally given to university professors and is named agregação (Decree of Law 239/2007), while the second is named habilitação (Decree of Law 124/99) and is used by doctoral researchers working in institutes outside universities. Legally, they are equivalent and are required for a professor (agregação) or a researcher (habilitação) to reach the top of their specific careers (full professor or coordinator researcher). Both degrees aim to evaluate whether the candidate has achieved significant outputs in terms of scientific activity, including activities in postgraduate supervision.
The process to obtain either of the degrees is very similar, with minor changes. Any PhD holder can submit a habilitation thesis to obtain the degree. For agregação, the thesis is composed of a detailed CV of the achievements obtained after concluding the PhD, a detailed report of an academic course taught at the university (or a proposed course to be taught), and the summary of a lesson to be given. For habilitação, the academic course report is replaced by a research proposal or a proposal for advanced PhD studies.
After the candidate submits the habilitation thesis, a jury composed of five to nine full professors or coordinator researchers first evaluates the submitted documents; the majority needs to approve the candidate's request. If approved, the candidate then needs to defend their thesis in a two-day public defense. The public defense lasts for two hours each day. On the first day, the curriculum of the candidate is discussed (for both degrees) and in the case of agregação, the candidate also needs to present the academic course selected. On the second day, the candidate needs to present a lecture (agregação) or a proposal of a research project (habilitação).
== Equivalent degrees ==
The Doctor of Science in Russia and some other countries formerly part of the Soviet Union or the Eastern bloc is equivalent to a habilitation. The cumulative form of the habilitation can be compared to the higher doctorates, such as the D.Sc. (Doctor of Science), Litt.D. (Doctor of Letters), LL.D. (Doctor of Laws) and D.D. (Doctor of Divinity) found in the UK, Ireland, and some Commonwealth countries, which are awarded on the basis of a career of published work. However, higher doctorates from these countries (except Russia) are often not recognized by any German state as being equivalent to the habilitation. In 1999, Russia and Germany signed a Statement on Mutual Academic Recognition of Russian Academic Degrees and German Academic Qualifications, including the equivalence of the Russian Doctor of Science and the German Habilitation qualification.
Furthermore, the position or title of an associate professor (or higher) at a European Union–based university is systematically translated into or compared to titles such as Universitätsprofessor (W2) (Germany), førsteamanuensis (Norway), or Doktor hab. (Poland) by institutions such as the European Commission Directorate-General for Research, and therefore usually implies the holder of such title has a degree equivalent to habilitation.
== Debate ==
=== German habilitation debate ===
In 2004, the habilitation was the subject of a major political debate in Germany. The former Federal Minister for Education and Science, Edelgard Bulmahn, aimed to abolish the system of the habilitation and replace it with the alternative concept of the junior professor: a researcher should first be employed for up to six years as a "junior professor" (a non-tenured position roughly equivalent to assistant professor in the United States) and so prove his/her suitability for holding a tenured professorship.
Many, especially researchers in the natural sciences, as well as young researchers, have long demanded the abandonment of the habilitation, which they consider an unnecessary and time-consuming obstacle in an academic career, contributing to the brain drain of talented young researchers who believe their chances of getting a professorship at a reasonable age are better abroad and hence move, for example, to the UK or the USA. Many feel overly dependent on their supervising principal investigators (the professor heading the research group), since superiors have the power to delay the process of completing the habilitation. A further problem concerns funding support for those who wish to pursue a habilitation, where older candidates often feel discriminated against, for example under the DFG's Emmy Noether programme. Furthermore, internal "soft" money might only be budgeted to pay for younger postdoctoral scientists. Because of the need to chase short-term research contracts, many researchers in the natural sciences apply for more transparent career development opportunities in other countries. In summary, a peer-reviewed demonstration of successful academic development and an international outlook (grant applications, well-cited publications, a network of collaborators, lecturing and organisational experience, and experience of having worked and published abroad) is considered more than adequate compensation for a habilitation.
On the other hand, amongst many senior researchers, especially in medicine, the humanities and the social sciences, the habilitation was—and still is—regarded as a valuable instrument of quality control before giving somebody a tenured position for life.
Bavaria, Saxony and Thuringia, three states with conservative governments, filed suit at the German Constitutional Court against the new law replacing the habilitation with the junior professor. The Court concurred with their argument that the Bundestag (the federal parliament) cannot pass such a law, because the German constitution explicitly states that affairs of education are the sole responsibility of the states and declared the law to be invalid in June 2004. In reaction, a new federal law was passed, giving the states more freedom regarding habilitations and junior professors. The junior professor has since been legally established in all states, but it is still possible—and encouraged—for an academic career in many subjects in Germany to pursue a habilitation.
== See also ==
Habilitation to Supervise Research
Postdoctoral research
== References ==
== Further reading ==
A short description of PhD and Habilitation at the Free University of Berlin, Germany
Education in Austria at the European Education Directory
Germany tries to break its Habilitation Habit, article in the science magazine of the AAAS
Habilitation procedure at the Technical University of Munich, Germany
Higher Education in Hungary at the Encyclopædia Britannica
Habilitation in Romania
Human science (or human sciences in the plural) studies the philosophical, biological, social, justice, and cultural aspects of human life. Human science aims to expand the understanding of the human world through a broad interdisciplinary approach. It encompasses a wide range of fields, including history, philosophy, sociology, psychology, justice studies, evolutionary biology, biochemistry, neurosciences, folkloristics, and anthropology. It is the study and interpretation of the experiences, activities, constructs, and artifacts associated with human beings. The study of human sciences attempts to expand and enlighten human beings' knowledge of their existence, their interrelationship with other species and systems, and the development of artifacts to perpetuate human expression and thought. It is the study of human phenomena. The study of the human experience is historical and current in nature. It requires the evaluation and interpretation of the historic human experience and the analysis of current human activity to gain an understanding of human phenomena and to project the outlines of human evolution. Human science is an objective, informed critique of human existence and how it relates to reality.
Underlying human science is the relationship between various humanistic modes of inquiry within fields such as history, sociology, folkloristics, anthropology, and economics, and advances in such things as genetics, evolutionary biology, and the social sciences, for the purpose of understanding our lives in a rapidly changing world. Its use of an empirical methodology that encompasses psychological experience contrasts with the purely positivistic approach typical of the natural sciences, which excludes all methods not based solely on sensory observations. Modern approaches in the human sciences integrate an understanding of human structure, function, and adaptation with a broader exploration of what it means to be human.
The term is also used to distinguish not only the content of a field of study from that of the natural science, but also its methodology.
== Meaning of 'science' ==
Ambiguity and confusion regarding the usage of the terms 'science', 'empirical science', and 'scientific method' have complicated the usage of the term 'human science' with respect to human activities. The term 'science' is derived from the Latin scientia, meaning 'knowledge'. 'Science' may be appropriately used to refer to any branch of knowledge or study dealing with a body of facts or truths systematically arranged to show the operation of general laws.
However, according to positivists, the only authentic knowledge is scientific knowledge, which comes from the positive affirmation of theories through strict scientific method, or from mathematics. As a result of the positivist influence, the term science is frequently employed as a synonym for empirical science. Empirical science is knowledge based on the scientific method, a systematic approach to verification of knowledge first developed for dealing with natural physical phenomena and emphasizing the importance of experience based on sensory observation. However, even with regard to the natural sciences, significant differences exist among scientists and philosophers of science with regard to what constitutes valid scientific method; for example, evolutionary biology, geology, and astronomy, studying events that cannot be repeated, can use the method of historical narratives. More recently, usage of the term has been extended to the study of human social phenomena. Thus, the natural and social sciences are commonly classified as science, whereas the study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts are referred to as the humanities. Ambiguity with respect to the meaning of the term science is aggravated by the widespread use of the term formal science with reference to any one of several sciences that is predominantly concerned with abstract form that cannot be validated by physical experience through the senses, such as logic, mathematics, and the theoretical branches of computer science, information theory, and statistics.
== History ==
The phrase 'human science' in English was used during the 17th-century scientific revolution, for example by Theophilus Gale, to draw a distinction between supernatural knowledge (divine science) and study by humans (human science). John Locke also uses 'human science' to mean knowledge produced by people, but without the distinction. By the 20th century, this latter meaning was used at the same time as 'sciences that make human beings the topic of research'.
=== Early development ===
The term "moral science" was used by David Hume (1711–1776) in his Enquiry concerning the Principles of Morals to refer to the systematic study of human nature and relationships. Hume wished to establish a "science of human nature" based upon empirical phenomena, and excluding all that does not arise from observation. Rejecting teleological, theological and metaphysical explanations, Hume sought to develop an essentially descriptive methodology; phenomena were to be precisely characterized. He emphasized the necessity of carefully explicating the cognitive content of ideas and vocabulary, relating these to their empirical roots and real-world significance.
A variety of early thinkers in the humanistic sciences took up Hume's direction. Adam Smith, for example, conceived of economics as a moral science in the Humean sense.
=== Later development ===
Partly in reaction to the establishment of positivist philosophy and the latter's Comtean intrusions into traditionally humanistic areas such as sociology, non-positivistic researchers in the humanistic sciences began to carefully but emphatically distinguish the methodological approach appropriate to these areas of study, for which the unique and distinguishing characteristics of phenomena are in the forefront (e.g., for the biographer), from that appropriate to the natural sciences, for which the ability to link phenomena into generalized groups is foremost. In this sense, Johann Gustav Droysen contrasted the humanistic sciences' need to comprehend the phenomena under consideration with natural science's need to explain phenomena, while Windelband coined the terms idiographic for a descriptive study of the individual nature of phenomena, and nomothetic for sciences that aim to define generalizing laws.
Wilhelm Dilthey brought nineteenth-century attempts to formulate a methodology appropriate to the humanistic sciences together with Hume's term "moral science", which he translated as Geisteswissenschaft, a term with no exact English equivalent. Dilthey attempted to articulate the entire range of the moral sciences in a comprehensive and systematic way. Meanwhile, his conception of Geisteswissenschaften also encompasses the abovementioned study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts. He characterized the scientific nature of a study as depending upon:
The conviction that perception gives access to reality
The self-evident nature of logical reasoning
The principle of sufficient reason
But the specific nature of the Geisteswissenschaften is based on the "inner" experience (Erleben), the "comprehension" (Verstehen) of the meaning of expressions, and "understanding" in terms of the relations of the part and the whole, in contrast to the Naturwissenschaften, the "explanation" of phenomena by hypothetical laws in the "natural sciences".
Edmund Husserl, a student of Franz Brentano, articulated his phenomenological philosophy in a way that could be thought of as a basis for Dilthey's attempt. Dilthey appreciated Husserl's Logische Untersuchungen (1900/1901, the first draft of Husserl's Phenomenology) as an "epoch-making" epistemological foundation for his conception of Geisteswissenschaften.
In recent years, 'human science' has been used to refer to "a philosophy and approach to science that seeks to understand human experience in deeply subjective, personal, historical, contextual, cross-cultural, political, and spiritual terms. Human science is the science of qualities rather than of quantities and closes the subject-object split in science. In particular, it addresses the ways in which self-reflection, art, music, poetry, drama, language and imagery reveal the human condition. By being interpretive, reflective, and appreciative, human science re-opens the conversation among science, art, and philosophy."
== Objective vs. subjective experiences ==
Since Auguste Comte, the positivistic social sciences have sought to imitate the approach of the natural sciences by emphasizing the importance of objective external observations and searching for universal laws whose operation is predicated on external initial conditions that do not take into account differences in subjective human perception and attitude. Critics argue that subjective human experience and intention play such a central role in determining human social behavior that an objective approach to the social sciences is too confining. Rejecting the positivist influence, they argue that the scientific method can rightly be applied to subjective, as well as objective, experience. The term subjective is used in this context to refer to inner psychological experience rather than outer sensory experience. It is not used in the sense of being prejudiced by personal motives or beliefs.
== Human science in universities ==
Since 1878, the University of Cambridge has been home to the Moral Sciences Club, with strong ties to analytic philosophy.
The Human Science degree is relatively young. It has been a degree subject at Oxford since 1969. At University College London, it was proposed in 1973 by Professor J. Z. Young and implemented two years later. His aim was to train general science graduates who would be scientifically literate, numerate, and easily able to communicate across a wide range of disciplines, replacing the traditional classical training for higher-level government and management careers. Central topics include the evolution of humans, their behavior, molecular and population genetics, population growth and aging, ethnic and cultural diversity, and human interaction with the environment, including conservation, disease, and nutrition. The study of both biological and social disciplines, integrated within a framework of human diversity and sustainability, should enable the human scientist to develop professional competencies suited to address such multidimensional human problems.
In the United Kingdom, Human Science is offered at the degree level at several institutions which include:
University of Oxford
University College London (as Human Sciences and as Human Sciences and Evolution)
King's College London (as Anatomy, Developmental & Human Biology)
University of Exeter
Durham University (as Health and Human Sciences)
Cardiff University (as Human and Social Sciences)
In other countries:
Osaka University
Waseda University
Tokiwa University
Senshu University
Aoyama Gakuin University (as College of Community Studies)
Kobe University
Kanagawa University
Bunkyo University
Sophia University
Ghent University (in the narrow sense, as Moral sciences, "an integrated empirical and philosophical study of values, norms and world views")
== See also ==
History of the Human Sciences (journal)
Social science
Humanism
Humanities
== References ==
== Bibliography ==
Flew, A. (1986). David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford
Hume, David, An Enquiry Concerning the Principles of Morals
== External links ==
Institute for Comparative Research in Human and Social Sciences (ICR) - Japan
Human Science Lab - London
Human Science(s) across Global Academies
Marxism philosophy
A Doctor of Juridical Science (SJD; Latin: Scientiae Juridicae Doctor), or a Doctor of the Science of Law (JSD; Latin: Juridicae Scientiae Doctor), is a research doctorate degree in law that is equivalent to a Ph.D. degree. In most countries, it is the most advanced law degree that can be earned.
== Australia ==
The SJD is offered by the Australian National University, Bond University, La Trobe University, the University of Canberra, the University of New South Wales, the University of Technology Sydney, and the University of Western Australia.
The University of Sydney stopped accepting new applications for an SJD in 2018.
== Canada ==
In Canada, the JSD or SJD is offered only at the University of Toronto Faculty of Law. Other law schools in Canada instead offer a PhD in law as the terminal degree.
== Italy ==
In Italy, the title of Doctor of Juridical Science (dottore in scienze giuridiche) is awarded to holders of a Degree in Juridical Sciences (laurea in scienze giuridiche, EQF level 6), while Magistral Doctor of Juridical Sciences (dottore magistrale in scienze giuridiche) is awarded to holders of a Magistral Degree in Juridical Sciences (laurea magistrale in scienze giuridiche, EQF level 7).
Instead, the terminal degree for law is the research doctorate (PhD, dottorato di ricerca), awarding the title of Research Doctor (dottore di ricerca).
== United States ==
The JSD, or SJD, is a research doctorate, and as such, in contrast to the JD, it is equivalent to the more commonly awarded research doctorate, the PhD. It is the most advanced law degree.
Applicants for the program must have outstanding academic credentials. A professional degree in law (such as a JD) is required, as well as an LLM. Exceptions as to the latter condition (i.e., holding an LLM) are seldom—if ever—granted.
The JSD/SJD typically requires three to five years to complete. The program begins with a combination of required and elective coursework. Then, upon passage of the oral exam, the student advances to doctoral candidacy. Completion of the program requires a dissertation, which serves as an original contribution to the scholarly field of law.
The JSD/SJD is rarely earned by American scholars. The American Bar Association considers the JD a sufficient academic credential for the instruction of the law. This has been adopted by virtually all American law schools, though outstanding academic performance and an extensive record of legal publications are usually required for tenure-track employment at most universities. Most scholars who complete the JSD/SJD at American universities are either international students seeking academic employment in their home countries (where a research doctorate may be required) or American scholars already employed, and who wish to further their legal education at the highest level.
Notable recipients of the degree of Doctor of Juridical Science include:
Erwin Griswold (Harvard, 1929), United States Solicitor General
Mastin Gentry White (Harvard, 1933), Judge on the United States Court of Federal Claims
Francis Mading Deng (Yale, 1968), South Sudanese diplomat
Sang-Hyun Song (Cornell Law School, 1970), President of the International Criminal Court (ICC)
Lobsang Sangay (Harvard, 2004), former President of the Central Tibetan Administration and professor of law at Harvard University
Charles Hamilton Houston (Harvard, 1923), prominent civil rights attorney
Pauli Murray (Yale, 1965), prominent civil rights advocate
Ayala Procaccia (University of Pennsylvania, 1972), Israel Supreme Court Justice
Edward H. Levi (Yale, 1938) President of University of Chicago, US Attorney General
Dionysia-Theodora Avgerinopoulou (Columbia, 2011), member of the Hellenic Parliament
Christos Rozakis (University of Illinois, 1973) (President of the Administrative Tribunal of the Council of Europe and former vice-president of the European Court of Human Rights)
Ma Ying-jeou (Harvard, 1980), former President of the Republic of China (Taiwan)
Theodor Meron (Harvard), professor of law (New York University School of Law) and president of the International Criminal Tribunal for the Former Yugoslavia
Daniel Boorstin (Yale, 1940), American historian
Dhananjaya Y. Chandrachud (Harvard, 1986), The Chief Justice of the Supreme Court of India
Katherine Franke (Yale Law School, 1998), Sulzbacher Professor of Law, Gender, and Sexuality Studies, Columbia University; Director, Center for Gender and Sexuality Law at Columbia Law School; Faculty Director, The Law, Rights, and Religion Project at Columbia Law School
Lucian Bebchuk (Harvard, 1984), William J. Friedman and Alicia Townsend Friedman Professor of Law, Economics, and Finance, and Director of the Program on Corporate Governance, Harvard Law School
Xue Hanqin (Columbia, 1995), U.N. International Court of Justice judge
== See also ==
Doctor of Law
Legum Doctor (Doctor of Laws; LLD)
Juris Doctor (JD)
Master of Laws (LLM)
Bachelor of Laws (LLB)
Doctor of Canon Law, Catholic Church (JCD)
== Notes and references == | Wikipedia/Doctor_of_Juridical_Science |
A graphical widget (also graphical control element or control) in a graphical user interface is an element of interaction, such as a button or a scroll bar. Controls are software components that a computer user interacts with through direct manipulation to read or edit information about an application. User interface libraries such as Windows Presentation Foundation, Qt, GTK, and Cocoa, contain a collection of controls and the logic to render these.
Each widget facilitates a specific type of user-computer interaction, and appears as a visible part of the application's GUI as defined by the theme and rendered by the rendering engine. The theme makes all widgets adhere to a unified aesthetic design and creates a sense of overall cohesion. Some widgets support interaction with the user, for example labels, buttons, and check boxes. Others act as containers that group the widgets added to them, for example windows, panels, and tabs.
Structuring a user interface with widget toolkits allows developers to reuse code for similar tasks, and provides users with a common language for interaction, maintaining consistency throughout the whole information system.
Graphical user interface builders facilitate the authoring of GUIs in a WYSIWYG manner employing a user interface markup language. They automatically generate all the source code for a widget from general descriptions provided by the developer, usually through direct manipulation.
== History ==
Around 1920, the word widget entered American English as a generic term for any useful device, particularly a product manufactured for sale; a gadget.
In 1988, the term widget is attested in the context of Project Athena and the X Window System. In An Overview of the X Toolkit by Joel McCormack and Paul Asente, it says:
The toolkit provides a library of user-interface components ("widgets") like text labels, scroll bars, command buttons, and menus; enables programmers to write new widgets; and provides the glue to assemble widgets into a complete user interface.
The same year, in the manual X Toolkit Widgets - C Language X Interface by Ralph R. Swick and Terry Weissman, it says:
In the X Toolkit, a widget is the combination of an X window or sub window and its associated input and output semantics.
Finally, still in the same year, Ralph R. Swick and Mark S. Ackerman explain where the term widget came from:
We chose this term since all other common terms were overloaded with inappropriate connotations. We offer the observation to the skeptical, however, that the principal realization of a widget is its associated X window and the common initial letter is not un-useful.
== Usage ==
Any widget displays an information arrangement changeable by the user, such as a window or a text box. The defining characteristic of a widget is to provide a single interaction point for the direct manipulation of a given kind of data. In other words, widgets are basic visual building blocks which, combined in an application, hold all the data processed by the application and the available interactions on this data.
GUI widgets are graphical elements used to build the human–machine interface of a program. They are implemented as software components. Widget toolkits and software frameworks, such as GTK+ or Qt, collect them in software libraries so that programmers can use them to build GUIs for their programs.
A family of common reusable widgets has evolved for holding general information, based on research at Xerox's Palo Alto Research Center for the Xerox Alto user interface. Various implementations of these generic widgets are often packaged together in widget toolkits, which programmers use to build graphical user interfaces (GUIs). Most operating systems include a set of ready-to-tailor widgets that a programmer can incorporate in an application, specifying how each is to behave. Each type of widget is generally defined as a class in object-oriented programming (OOP), so many widgets are derived through class inheritance.
In the context of an application, a widget may be enabled or disabled at a given point in time. An enabled widget has the capacity to respond to events, such as keystrokes or mouse actions. A widget that cannot respond to such events is considered disabled. The appearance of a widget typically differs depending on whether it is enabled or disabled; when disabled, a widget may be drawn in a lighter color ("grayed out") or be obscured visually in some way. See the adjacent image for an example.
The benefit of disabling unavailable controls rather than hiding them entirely is that users are shown that the control exists but is currently unavailable (with the implication that changing some other control may make it available), instead of possibly leaving the user uncertain about where to find the control at all. On pop-up dialogues, buttons might appear greyed out shortly after appearance to prevent accidental clicking or inadvertent double-tapping.
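The class-based, stateful nature of widgets described above can be sketched in plain Python. This is a hypothetical toy model, not the API of any real toolkit: a base Widget class carries the enabled/disabled state that gates event handling, and concrete widgets such as buttons are derived from it by inheritance.

```python
# Toy model of widgets as classes (invented names, no real toolkit API):
# a disabled widget still exists but ignores events, matching the
# "grayed out" behavior described above.

class Widget:
    def __init__(self, label=""):
        self.label = label
        self.enabled = True          # a disabled widget ignores events

    def handle_click(self):
        if not self.enabled:
            return False             # grayed out: the event is dropped
        return self.on_click()

    def on_click(self):
        return True                  # subclasses override this hook

class Button(Widget):
    def __init__(self, label, action):
        super().__init__(label)
        self.action = action

    def on_click(self):
        self.action()                # invoke the button's action
        return True

class ToggleButton(Button):          # derived from Button by inheritance
    def __init__(self, label):
        super().__init__(label, lambda: None)
        self.pressed = False

    def on_click(self):
        self.pressed = not self.pressed
        return True

clicks = []
ok = Button("OK", lambda: clicks.append("OK"))
ok.handle_click()                    # enabled: the action runs
ok.enabled = False
ok.handle_click()                    # disabled: nothing happens
print(clicks)                        # ['OK']
```

Real toolkits follow the same shape at much larger scale: a common base class owns shared state such as enablement and geometry, and each widget type specializes the event hooks.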
Widgets are sometimes qualified as virtual to distinguish them from their physical counterparts, e.g. virtual buttons that can be clicked with a pointer, vs. physical buttons that can be pressed with a finger (such as those on a computer mouse).
A related (but different) concept is the desktop widget, a small specialized GUI application that provides some visual information and/or easy access to frequently used functions such as clocks, calendars, news aggregators, calculators and desktop notes. These kinds of widgets are hosted by a widget engine.
== List of common generic widgets ==
=== Selection and display of collections ===
Button – control which can be clicked upon to perform an action. An equivalent to a push-button as found on mechanical or electronic instruments.
Radio button – control which can be clicked upon to select one option from a selection of options, similar to selecting a radio station from a group of buttons dedicated to radio tuning. Radio buttons always appear in pairs or larger groups, and only one option in the group can be selected at a time; selecting a new item from the group's buttons also de-selects the previously selected button.
Check box – control which can be clicked upon to enable or disable an option. Also called a tick box. The box indicates an "on" or "off" state via a check mark/tick ☑ or a cross ☒. Can be shown in an intermediate state (shaded or with a dash) to indicate that various objects in a multiple selection have different values for the property represented by the check box. Multiple check boxes in a group may be selected, in contrast with radio buttons.
Toggle switch – functionally similar to a check box. Can be toggled on and off, but unlike check boxes, this typically has an immediate effect.
Toggle button – functionally similar to a check box; works as a switch, though it appears as a button. Can be toggled on and off.
Split button – control combining a button (typically invoking some default action) and a drop-down list with related, secondary actions
Cycle button – a button that cycles its content through two or more values, thus enabling selection of one from a group of items.
Slider – control with a handle that can be moved up and down (vertical slider) or right and left (horizontal slider) on a bar to select a value (or a range if two handles are present). The bar allows users to make adjustments to a value or process throughout a range of allowed values.
List box – a graphical control element that allows the user to select one or more items from a list contained within a static, multiple line text box.
Spinner – value input control which has small up and down buttons to step through a range of values
Drop-down list – A list of items from which to select. The list normally only displays items when a special button or indicator is clicked.
Menu – control with multiple actions which can be clicked upon to choose a selection to activate
Context menu – a type of menu whose contents depend on the context or state in effect when the menu is invoked
Pie menu – a circular context menu where selection depends on direction
Menu bar – a graphical control element which contains drop down menus
Toolbar – a graphical control element on which on-screen buttons, icons, menus, or other input or output elements are placed
Ribbon – a hybrid of menu and toolbar, displaying a large collection of commands in a visual layout through a tabbed interface.
Combo box (text box with attached menu or List box) – A combination of a single-line text box and a drop-down list or list box, allowing the user to either type a value directly into the control or choose from the list of existing options.
Icon – a quickly comprehensible symbol of a software tool, function, or a data file.
Tree view – a graphical control element that presents a hierarchical view of information
Grid view or datagrid – a spreadsheet-like tabular view of data that allows numbers or text to be entered in rows and columns.
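The mutual-exclusion rule for radio buttons described in the list above can be modeled in a few lines of Python. This is a toy sketch with invented names, not taken from any toolkit:

```python
# Model of a radio-button group: members are mutually exclusive, so
# selecting a new option implicitly de-selects the previous one.

class RadioGroup:
    def __init__(self, options):
        self.options = list(options)
        self.selected = None         # nothing selected initially

    def select(self, option):
        if option not in self.options:
            raise ValueError(f"unknown option: {option!r}")
        self.selected = option       # de-selects the old choice

band = RadioGroup(["AM", "FM", "DAB"])
band.select("AM")
band.select("FM")                    # "AM" is no longer selected
print(band.selected)                 # FM
```

Check boxes, by contrast, would each carry their own independent boolean state, since multiple boxes in a group may be selected at once.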
=== Navigation ===
Link – Text with some kind of indicator (usually underlining and/or color) that indicates that clicking it will take one to another screen or page.
Tab – a graphical control element that allows multiple documents or panels to be contained within a single window
Scrollbar – a graphical control element by which continuous text, pictures, or any other content can be scrolled in a predetermined direction (up, down, left, or right)
=== Text/value input ===
Text box (edit field) – a graphical control element intended to enable the user to input text
=== Output ===
Label – text used to describe another widget
Tooltip – informational window which appears when the mouse hovers over another control
Balloon help
Status bar – a graphical control element which presents an information area typically found at the window's bottom
Progress bar – a graphical control element used to visualize the progression of an extended computer operation, such as a download, file transfer, or installation
Infobar – a graphical control element used by many programs to display non-critical information to a user
=== Container ===
Window – a graphical control element consisting of a visual area containing some of the graphical user interface elements of the program it belongs to
Collapsible panel – a panel that can compactly store content which is hidden or revealed by clicking the tab of the widget.
Drawer – side sheets or surfaces containing supplementary content that may be anchored to, pulled out from, or pushed away beyond the left or right edge of the screen.
Accordion – a vertically stacked list of items, such as labels or thumbnails where each item can be "expanded" to reveal the associated content
Modal window – a graphical control element subordinate to an application's main window which creates a mode where the main window can not be used.
Dialog box – a small window that communicates information to the user and prompts for a response
Palette window (also known as "utility window") – a graphical control element which floats on top of all regular windows and offers ready access to tools, commands or information for the current application
Inspector window – a type of dialog window that shows a list of the current attributes of a selected object and allows these parameters to be changed on the fly
Frame – a type of box within which a collection of graphical control elements can be grouped as a way to show relationships visually
Canvas – generic drawing element for representing graphical information
Cover Flow – an animated, three-dimensional element for visually flipping through snapshots of documents, website bookmarks, album artwork, or photographs.
Bubble Flow – an animated, two-dimensional element that allows users to browse and interact with the entire tree view of a discussion thread.
Carousel – a graphical widget used to display visual cards in a way that is quick for users to browse, both on websites and in mobile apps
== See also ==
Graphical user interface elements
Geometric primitive
Widget engine for mostly unrelated, physically inspired "widgets"
Widget toolkit – a software library which contains a collection of widgets
Interaction technique
== References ==
== External links ==
Packaged Web Apps (Widgets) - Packaging and XML Configuration (Second Edition) - W3C Recommendation 27 November 2012
Widgets 1.0: The Widget Landscape (Q1 2008). W3C Working Draft 14 April 2008
Requirement For Standardizing Widgets. W3C Working Group Note 27 September 2011 | Wikipedia/Graphical_widget |
A label is a graphical control element which displays text on a form. It is usually a static control, having no interactivity. A label is generally used to identify a nearby text box or other widget. Some labels can respond to events such as mouse clicks, allowing the text of the label to be copied, but this is not standard user-interface practice. Labels usually cannot be given the focus.
There is also a similar control known as a link label. Unlike a standard label, a link label looks and acts like a hyperlink, and can be selected and activated. This control may have features such as changing colour when clicked or hovered over.
== References == | Wikipedia/Label_(control) |
Design-oriented programming is a way to author computer applications using a combination of text, graphics, and style elements in a unified code-space. The goal is to improve the experience of program writing for software developers, boost accessibility, and reduce eye-strain. Good design helps computer programmers to quickly locate sections of code using visual cues typically found in documents and web page authoring.
User interface design and graphical user interface builder research are the conceptual precursors to design-oriented programming languages. Those fields focus on the software experience for end users of the application and treat editing of the user interface as separate from the code-space. The important distinction is that design-oriented programming concerns the user experience of the programmers themselves and fully merges all elements into a single unified code-space.
== See also ==
User interface design
Graphical user interface builder
Elements of graphical user interfaces
Visual programming language
Experience design
User experience design
Usability
== References ==
Visual programming
Intro to Design-Oriented Programming Languages | Wikipedia/Design-Oriented_Programming |
GEM (for Graphics Environment Manager) is a discontinued operating environment released by Digital Research in 1985. GEM is known primarily as the native graphical user interface of the Atari ST series of computers, providing a WIMP desktop. It was also available for IBM PC compatibles and shipped with some models from Amstrad. GEM is used as the core for some commercial MS-DOS programs, the most notable being Ventura Publisher. It was ported to other computers that previously lacked graphical interfaces, but never gained traction. The final retail version of GEM was released in 1988.
Digital Research later produced X/GEM for their FlexOS real-time operating system with adaptations for OS/2 Presentation Manager and the X Window System under preparation as well.
== History ==
=== GSX ===
In late 1984, GEM started life at DRI as an outgrowth of a more general-purpose graphics library known as GSX (Graphics System Extension), written by a team led by Don Heiskell since about 1982. Lee Jay Lorenzen (at Graphic Software Systems) who had recently left Xerox PARC (the birthplace of the modern GUI) wrote much of the code. GSX was essentially a DRI-specific implementation of the GKS graphics standard proposed in the late 1970s. GSX was intended to allow DRI to write graphics programs (charting, etc.) for any of the 8-bit and 16-bit platforms CP/M-80, Concurrent CP/M, CP/M-86 and MS-DOS (NEC APC-III) would run on, a task that otherwise would have required considerable effort to port due to the large differences in graphics hardware (and concepts) between the various systems of that era.
GSX consisted of two parts: a selection of routines for common drawing operations, and the device drivers that are responsible for handling the actual output. The former was known as GDOS (Graphics Device Operating System) and the latter as GIOS (Graphics Input/Output System), a play on the division of CP/M into the machine-independent BDOS (Basic Disk Operating System) and the machine-specific BIOS (Basic Input/Output System). GDOS was a selection of routines that handled the GKS drawing, while GIOS actually used the underlying hardware to produce the output.
==== Known 8-bit device drivers ====
DDMODE0 Amstrad CPC screen in mode 0
DDMODE1 Amstrad CPC screen in mode 1
DDMODE2 Amstrad CPC screen in mode 2
DDSCREEN Amstrad PCW screen
DDBBC0 BBC Micro screen in mode 0
DDBBC1 BBC Micro screen in mode 1
DDGDC, DDNCRDMV NEC μPD7220
DDVRET VT100 + Retro-Graphics GEN.II (aka 4027/4010)
DDTS803 TeleVideo screen
DDHP26XX HP 2648 and 2627 terminals
DDQX10 QX-10 screen
DDFXLR8 Epson lo-res, 8-bit
DDFXHR8 Epson hi-res, 8-bit
DDFXLR7 Epson and Epson-compatible printers
DDCITOLR C. Itoh 8510A lo-res
DDCITOH C. Itoh 8510A
DD-DMP1 Amstrad DMP1 printer (aka Seikosha GP500M-2)
DDSHINWA Printers using Shinwa Industries mechanism
DDHP7470, DD7470 Hewlett-Packard HP 7470 and compatible pen plotters, HP-GL/2
DD7220 Hewlett-Packard HP 7220, HP-GL
DDGEN2 Retro-Graphics GEN.II (Ratfor source code in Programmer's Guide)
DDHI3M Houston Instrument HiPlot DMP
DDHI7M Houston Instrument HiPlot DMP
DDMX80 Epson MX-80 + Graftrax Plus
DDESP Electric Studio Light Pen (Amstrad PCW)
DDOKI84 Oki Data Microline
DDMF GEM metafile
DDPS PostScript metafile
==== Known 16-bit device drivers ====
DDLA100 DEC
DDLA50 DEC
DDNECAPC NEC APC
NCRPC4 NCR DecisionMate V
IBMBLMP2, IBMBLMP3 IBM CGA monochrome mode
IBMBLCP2, IBMBLCP3 IBM CGA color mode
IBMCHMP6
IBMEHFP6, IBMEHMP6, IBMELFP6 IBM Enhanced Graphics Adapter
HERMONP2, IBMHERP3, HERMONP6 Hercules Graphics Card (720×348)
UM85C408AF UMC VGA Graphics
DDIDSM IDS Monochrome
DDANADXM Anadex DP-9501 and DP-9001A
DDCITOLR C. Itoh 8510A lo-res
DDCNTXM Centronics 351, 352 and 353
DDDS180 Datasouth
DDOKI84 Oki Data Microline
DDPMVP Printronix MVP
DD3EPSNL IBM/Epson FX-80 lo-res Printer (see DDFXLR7 and DDFXLR8)
DD3EPSNH IBM/Epson FX-80 hi-res Printer (see DDFXHR8)
DD75XHM1 Regnecentralen RC759 Piccoline
DDGSXM Metafile
EPSMONH6
IBMHP743 Hewlett-Packard 7470A/7475A Plotter (see DDHP7470 and DD7470)
METAFIL6 Metafile
PALETTE Polaroid camera
The DOS version of GSX supports loading drivers in the CP/M-86 CMD format. Consequently, the same driver binary may operate under both CP/M-86 and DOS.
=== GEM ===
==== Intel versions ====
The 16-bit version of GSX 1.3 evolved into one part of what would later be known as GEM, which was an effort to build a full GUI system using the earlier GSX work as its basis. Originally known as Crystal as a play on an IBM project called Glass, the name was later changed to GEM.
Under GEM, GSX became GEM VDI (Virtual Device Interface), responsible for basic graphics and drawing. VDI also added the ability to work with multiple fonts and added a selection of raster drawing commands to the formerly vector-only GKS-based drawing commands. VDI also added multiple viewports, a key addition for use with windows.
A new module, GEM AES (Application Environment Services), provided the window management and UI elements, and GEM Desktop used both libraries in combination to provide a GUI. The 8086 version of the entire system was first officially demoed at COMDEX in November 1984, following a demonstration on the 80286-based Acorn Business Computer in September 1984 where the software had been attributed to Acorn, and the system was shipped as GEM/1 on 28 February 1985.
===== GEM/1 =====
GEM Desktop 1.0 was released on 28 February 1985.
GEM Desktop 1.1 was released on 10 April 1985 with support for CGA and EGA displays.
A version for the Apricot Computers F-Series, supporting 640×200 in up to 8 colors, was also available as GEM Desktop 1.2.
Digital Research also positioned Concurrent DOS 4.1 with GEM as alternative for IBM's TopView.
DRI originally designed GEM for DOS so that it would check for and only run on IBM computers, and not PC compatibles like those from Compaq, as the company hoped to receive license fees from compatible makers. Developers reacted with what BYTE described as "a small explosion"; it reported that at a DRI-hosted seminar in February 1985, more than half of the attendees agreed that GEM's incompatibility with Compaq was a serious limitation. Later that month the company removed the restriction. Applications that supported GEM included Lifetree Software's GEM Write.
At this point, Apple Computer sued DRI in what would turn into a long dispute over the "look and feel" of the GEM/1 system, which was an almost direct copy of Macintosh (with some elements bearing a closer resemblance to those in the earlier Lisa, available since January 1983). This eventually led to DRI being forced to change several basic features of the system. (See also: Apple v. Digital Research.) Apple would later go on to sue other companies for similar issues, including their copyright lawsuit against Microsoft and HP.
In addition to printers the system also contained drivers for some more unusual devices such as the Polaroid Palette.
===== GEM/2 =====
DRI responded with the "lawsuit-friendly" GEM Desktop 2.0, released on 24 March 1986, which eventually added support for VGA sometime after that standard's introduction in 1987. It allowed the display of only two fixed windows on the "desktop" (though other programs could do what they wished), changed the trash can icon, and removed the animations for things like opening and closing windows. It was otherwise similar to GEM/1, but also included a number of bug fixes and cosmetic improvements.
In 1988 Stewart Alsop II said that GEM was among several GUIs that "have already been knocked out" of the market by Apple, IBM/Microsoft, and others.
===== GEM XM =====
GEM XM with "GEM Desktop 3.0" was an updated version of GEM/2 in 1986/1987 for DOS (including DOS Plus) which allowed task-switching and the ability to run up to ten GEM and DOS programs at once, swapping out to expanded memory (XM) through EMS/EEMS or to disk (including RAM disks, thereby also allowing the use of extended memory). Data could be copied and pasted between applications through a clipboard with filter function (a feature later also found in TaskMAX under DR DOS 6.0). Digital Research planned to offer GEM XM as an option to GEM Draw Plus users and through OEM channels.
The GEM XM source code is now freely available under the terms of GNU General Public License.
===== GEM/3 =====
The last retail release was GEM/3 Desktop, released on 3 November 1988, which had speed improvements and shipped with a number of basic applications. Commercial sales of GEM ended with GEM/3; the source code was subsequently made available to a number of DRI's leading customers.
While GEM/2 for the PC still provided a GSX API in addition to the GEM API, GEM/3 no longer did.
===== GEM/4 for CCP Artline =====
GEM/4, released in 1990, included the ability to work with Bézier curves, a feature still not commonly found outside the PostScript world. This version was produced specifically for Artline 2, a drawing program from the German company CCP Development GmbH.
The system also included changes to the font management system, which made it incompatible with the likes of Timeworks Publisher.
Artline 1 still ran on GEM 3.1.
===== GEM/5 for GST Timeworks Publisher =====
Another version of GEM called GEM/5 was produced by GST Software Products for Timeworks' Publisher 2.1. It contained an updated look with 3D buttons, along with features such as on-the-fly font scaling. It came complete with all the standard GEM 3.1 tools. This version was produced from GEM 3.13 with only the Bézier handling taken from GEM/4.
===== ViewMAX for DR DOS =====
GEM Desktop itself was spun off in 1990 as a product known as ViewMAX which was used solely as a file management shell under DR DOS. In this form the system could not run other GEM programs. This led to a situation where a number of applications (including ViewMAX) could exist all with their own statically linked copy of the GEM system. This scenario was actually rare, as few native GEM programs were published. In 1991, ViewMAX 2 was released.
In these forms, GEM survived until DRI was purchased by Novell in June 1991 and all GEM development was cancelled.
===== X/GEM =====
Throughout this time DRI had also been working on making the GEM system capable of multitasking. This started with X/GEM based on GEM/1, but this required use of one of the multitasking CP/M-based operating systems. DRI also produced X/GEM for their FlexOS real-time operating system with adaptations for OS/2 Presentation Manager and the X Window System under preparation as well.
===== Ventura Publisher =====
Lee Lorenzen left soon after the release of GEM/1, when it became clear that DRI had no strong interest in application development. He then joined with two other former DRI employees, Don Heiskell and John Meyer, to start Ventura Software. They developed Ventura Publisher (which was later marketed by Xerox and eventually by Corel), which would go on to be a very popular desktop publishing program for some time.
==== Atari versions ====
Development of the production 68000 version of GEM began in September 1984, when Atari sent a team called "The Monterey Group" to Digital Research to begin work on porting GEM. Originally, the plan was to run GEM on top of CP/M-68K, both ostensibly ported to Motorola 68000 by DRI prior to the ST design being created. In fact, these ports were unusable and would require considerable development. Digital Research also offered GEMDOS (originally written as GEM DOS, it was also called "Project Jason"), a DOS-like operating system aimed to port GEM to different hardware platforms. It was available for 8086 and 68000 processors and had been adapted to the Apple Lisa 2/5 and the Motorola VME/10 development system. Atari decided in January 1985 to give up on the existing CP/M-68K code and instead port DRI GEMDOS to the Atari ST platform, referring to it as TOS.
As Atari had provided most of the development of the 68000 version, they were given full rights to continued developments without needing to reverse-license it back to DRI. As a result, the Apple-DRI lawsuit did not apply to the Atari versions of GEM, and they were allowed to keep a more Mac-like UI.
Over the next seven years, from 1985 to 1992, new versions of TOS were released with each new generation of the ST line. Updates included support for more colors and higher resolutions in the raster-side of the system, but remained generally similar to the original in terms of GKS support. In 1992, Atari released TOS 4, or MultiTOS, along with their final computer system, the Falcon030. In combination with MiNT, TOS 4 allowed full multitasking support in GEM.
==== Continued development ====
When Caldera bought the remaining Digital Research assets from Novell on 23 July 1996, initial plans were to revive GEM and ViewMAX technologies for a low-footprint user interface for OpenDOS in mobile applications as Caldera View, but these plans were abandoned by Caldera UK in favour of DR-WebSpyder and GROW. Caldera Thin Clients (later known as Lineo) released the source to GEM and GEM XM under the terms of GNU GPL-2.0-only in April 1999. The development of GEM for PC continues as FreeGEM and OpenGEM.
On the Atari ST platform, the original DRI sources were ported again to be used in the free and open source TOS clone EmuTOS. New implementations of the AES portions of GEM have been implemented from scratch in the form of XaAES, and MyAES, both of which are fully re-entrant and support multitasking on top of the FreeMiNT multitasking extensions to TOS.
== Description ==
The "full" GEM system consisted of three main parts:
GEM VDI (Virtual Device Interface)
GEM AES (Application Environment Services)
GEM Desktop (an application providing drag-and-drop file management)
GEM VDI was the core graphics system of the overall GEM engine. It was responsible for "low level" drawing in the form of "draw line from here to here". VDI included a resolution and coordinate independent set of vector drawing instructions which were called from applications through a fairly simple interface. VDI also included environment information (state, or context), current color, line thickness, output device, etc.
These commands were then examined by GDOS, whose task it was to send the commands to the proper driver for actual rendering. For instance, if a particular GEM VDI environment was connected to the screen, the VDI instructions were then routed to the screen driver for drawing. Simply changing the environment to point to the printer was all that was needed (in theory) to print, dramatically reducing the developer workload (they formerly had to do printing "by hand" in all applications). GDOS was also responsible for loading up the drivers and any requested fonts when GEM was first loaded.
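The routing described above can be sketched as follows. This is an illustrative Python model with invented names, not DRI's actual API: the application issues the same device-independent call, and the environment forwards it to whichever driver it currently points at.

```python
# Illustrative model of the GDOS idea: applications draw through an
# environment, and re-pointing the environment at a different driver
# (screen vs. printer) changes where output goes, with no change to
# the application's drawing code.

class Driver:
    def __init__(self, name):
        self.name = name
        self.output = []             # rendered commands, for inspection

    def draw_line(self, x1, y1, x2, y2):
        self.output.append(f"{self.name}: line ({x1},{y1})-({x2},{y2})")

class Environment:
    """Holds drawing context and the currently active output device."""
    def __init__(self, driver):
        self.driver = driver

    def draw_line(self, x1, y1, x2, y2):
        self.driver.draw_line(x1, y1, x2, y2)   # route to the device

screen, printer = Driver("screen"), Driver("printer")
env = Environment(screen)
env.draw_line(0, 0, 10, 10)          # rendered by the screen driver
env.driver = printer                 # re-point the environment to print
env.draw_line(0, 0, 10, 10)          # same call, now goes to the printer
print(printer.output)                # ['printer: line (0,0)-(10,10)']
```

This is the property the text attributes to GEM VDI: because context lives with the device rather than inside the application, "printing" reduces to pointing the environment at a different driver.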
One major advantage VDI provided over the Macintosh was the way multiple devices and contexts were handled. In the Mac such information was stored in memory inside the application. This resulted in serious problems when attempting to make the Mac handle pre-emptive multitasking, as the drawing layer (QuickDraw) needed to have direct memory access into all programs. In GEM VDI however, such information was stored in the device itself, with GDOS creating "virtual devices" for every context – each window for instance.
GEM AES provided the window system, window manager, UI style and other GUI elements (widgets). For performance reasons, many of the GUI widgets were actually drawn using character graphics. Compared to the Macintosh, AES provided a rather spartan look and the system shipped with a single monospaced font.
AES performs its operations by calling the VDI, but in a more general sense the two parts of GEM were often completely separated in applications. Applications typically called AES commands to set up a new window, with the rest of the application using VDI calls to actually draw into that window.
GEM Desktop was an application program that used AES to provide a file manager and launcher, the traditional "desktop" environment that users had come to expect from the Macintosh. Unlike the Macintosh, the GEM Desktop ran on top of DOS (MS-DOS, DOS Plus or DR DOS on the PC, GEMDOS/TOS on the Atari), and as a result the actual display was cluttered with computer-like items, including path names and wildcards. In general, GEM was much more "geeky" than the Mac, but simply running a usable shell on DOS was a huge achievement on its own. Otherwise, GEM had its own advantages over Mac OS, such as proportional sliders.
Native PC GEM applications use the file extension .APP for executables, whereas GEM desktop accessories use the file extension .ACC instead. All desktop accessories (and also a few simple applications) can be run under ViewMAX without modification.
== See also ==
Atari TOS
EmuTOS
FreeGEM
OpenGEM
GEM character set
Atari ST character set
Resource construction set (RCS)
Pantone Color Computer Graphics
== References ==
== Further reading ==
Apricot Portable - Technical Reference Manual. Vol. Section 3: Software. ACT (International) Limited. 1984. Retrieved 2020-01-13. (228 pages)
GSX Graphics Extension - Programmer's Guide (PDF) (2 ed.). Digital Research Inc. September 1983. 5000-2024. Archived (PDF) from the original on 2020-02-11. Retrieved 2020-01-13.
== External links ==
GEM - history, documentation and links to various open-source GEM projects
Afros - a distribution of Atari OS components (including, for example, EmuTOS), aimed specifically at ARAnyM
ARAnyM (Atari Running on Any Machine) - an open-source emulator/virtual machine that can run Atari GEM applications
"GEM : THE PROJECT".
Creating of TOS (part 1) Archived 2011-05-12 at the Wayback Machine - Landon Dyer, one of the original members of "The Monterey Group"
Creating of TOS (part 2) Archived 2010-09-21 at the Wayback Machine - Landon Dyer, one of the original members of "The Monterey Group"
GEM demo 1985 - most of the program is about the Mac
John C. Elliott. "Intel GEM main page". | Wikipedia/Graphics_Environment_Manager |
Glade Interface Designer is a graphical user interface builder for GTK, with additional components for GNOME. In its third version, Glade is programming language–independent, and does not produce code for events, but rather an XML file that is then used with an appropriate binding (such as GtkAda for use with the Ada programming language).
Glade is free and open-source software distributed under the GNU General Public License. Glade's development and maintenance ceased in 2022, with the final release on 10 August 2022.
== History and development ==
The first Glade release, version 0.1, was made on 18 April 1998.
Glade 3 was released on 12 August 2006. According to the Glade Web site, the most noticeable differences for the end-user are:
Undo and redo support in all operations.
Support for multiple open projects.
Removal of code generation.
Contextual help system with Devhelp
Most of the differences are internal. Glade-3 is a complete rewrite that takes advantage of the new features of GTK+ 2 and the GObject system (Glade-3 was started before Glade-1 had been ported to GTK+ 2). As a result, the Glade-3 codebase is smaller and enables interesting new capabilities, including:
Catalogs of "pluggable" widgets. This means that external libraries can provide their set of widgets at runtime and Glade will detect them. In fact, Glade 3 supports only standard GTK widgets; GNOME UI and DB widgets are provided separately.
The various Glade Tools (palette, editor, etc.) are implemented as widgets. This allows for easier integration in IDEs like Anjuta, and makes it easier to change the Glade UI.
On 5 April 2011, two parallel installable stable Glade versions were released:
Glade 3.8: includes full support for GTK+ up to version 2.24. This version serves as a migration path for older projects moving to GTK+ 3.0.
Glade 3.10: includes support only for widgets that are still present in GTK+ 3.0, and drops support for Libglade.
On 11 June 2015 Glade 3.19.0 was released. It requires at least GTK+ 3.16.0. Along with many bug fixes, this version is the first to support the GtkStack, GtkHeaderBar and GtkSidebar widgets.
== GtkBuilder ==
GtkBuilder is the XML format that the Glade Interface Designer uses to save its forms. These documents can then be used with the GtkBuilder object to instantiate the form using GTK. GladeXML is the XML format that was used in conjunction with libglade, which is now deprecated.
Glade Interface Designer automatically generates all the source code for a graphical control element.
The Gtk.Builder class allows user interfaces to be designed without writing code. The interface is described in an Extensible Markup Language (XML) file; the XML description is loaded at runtime and the objects are created automatically. The Glade Interface Designer allows creation of the user interface in a WYSIWYG manner. The description of the user interface is independent of the programming language being used.
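As a concrete illustration, the sketch below defines a minimal GtkBuilder-style XML description (the widget classes and ids are invented for the example) and uses only Python's standard library to list the declared objects, so it runs without GTK installed. With GTK available, the same document would instead be handed to Gtk.Builder:

```python
import xml.etree.ElementTree as ET

# A minimal GtkBuilder-style interface description: a window containing
# a single button. Classes and ids here are illustrative examples.
UI_XML = """<interface>
  <object class="GtkWindow" id="main_window">
    <property name="title">Demo</property>
    <child>
      <object class="GtkButton" id="ok_button">
        <property name="label">OK</property>
      </object>
    </child>
  </object>
</interface>"""

def list_objects(xml_text):
    """Return (class, id) pairs for every <object> in the description."""
    root = ET.fromstring(xml_text)
    return [(obj.get("class"), obj.get("id")) for obj in root.iter("object")]

print(list_objects(UI_XML))
# [('GtkWindow', 'main_window'), ('GtkButton', 'ok_button')]
```

With PyGObject installed, the equivalent GTK call would be along the lines of `Gtk.Builder.new_from_string(UI_XML, -1)` followed by `builder.get_object("ok_button")`.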
== Code sketching ==
Code sketchers are software applications that help a user create source code from a GladeXML file. Most code sketchers create source code which uses libglade and a GladeXML file to create the GUI. Some sketchers are able to create raw code that does not need the GladeXML file. The table below compares basic information about GladeXML code sketcher packages.
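The core of such a sketcher can be illustrated with a toy Python script (hypothetical, not one of the packages compared in the table): it parses a GladeXML-style document and emits boilerplate loading code for each named widget.

```python
import xml.etree.ElementTree as ET

# Toy GladeXML-style input; widget classes and ids are illustrative.
GLADE_XML = """<interface>
  <object class="GtkWindow" id="window1"/>
  <object class="GtkEntry" id="name_entry"/>
</interface>"""

def sketch_code(xml_text, glade_file="ui.glade"):
    """Emit libglade-style loading code, one line per widget id.

    `load_interface` is a placeholder for whatever loader the target
    binding provides; a real sketcher would emit its actual API calls.
    """
    root = ET.fromstring(xml_text)
    lines = [f'builder = load_interface("{glade_file}")']
    for obj in root.iter("object"):
        lines.append(f'{obj.get("id")} = builder.get_object("{obj.get("id")}")')
    return "\n".join(lines)

print(sketch_code(GLADE_XML))
```

A sketcher that produces "raw" code without the GladeXML file at runtime would instead walk the same tree and emit explicit constructor calls for each widget class.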
== Cambalache ==
Cambalache (/kambaˈlat͡ʃe/) is a free and open-source rapid application development (RAD) tool designed for creating user interfaces with GTK 4. It is designed as a successor to Glade, with a focus on supporting the GTK 4 library while maintaining compatibility with GTK 3. Cambalache is geared toward developers working within the GNOME ecosystem. Cambalache's design emphasizes the Model-View-Controller (MVC) architecture, ensuring separation between the UI components and the business logic of applications. The UI editing workspace is driven by a separate process called Merengue, which interfaces with Casilda, a Wayland compositor embedded in a GTK widget. This architectural choice improves stability by separating the user interface preview from the main application. This separation enables the system to handle different GTK versions efficiently, ensuring the rendered UI accurately mirrors the application's appearance and behavior.
== See also ==
List of language bindings for GTK
Interface Builder
Microsoft Blend
Qt Designer
XUL
== References ==
== External links ==
Official website
Old binaries for Windows on SourceForge
Old binaries for OS X | Wikipedia/Glade_Interface_Designer |
OutSystems is a low-code development platform which provides tools for companies to develop, deploy and manage omnichannel enterprise applications.
OutSystems was founded in 2001 in Lisbon, Portugal.
In June 2018 OutSystems secured a $360M round of funding from KKR and Goldman Sachs and reached the status of Unicorn.
In February 2021 OutSystems raised another $150M investment in a round co-led by Abdiel Capital and Tiger Global Management, bringing its total valuation to $9.5 billion. Ionic was acquired by OutSystems in November 2022.
OutSystems is a member of the Consortium of IT Software Quality (CISQ).
== Products ==
OutSystems is a low-code development platform for the development of mobile and web enterprise applications, which run in the cloud, on-premises or in hybrid environments.
In 2014 OutSystems launched a free version of the platform that provides developers with personal cloud environments to create and deploy web and mobile applications without charge. The current release is 11.53 for both the paid and free editions.
== References ==
== External links ==
Official website | Wikipedia/OutSystems |
User interface modeling is a development technique used by computer application programmers. Today's user interfaces (UIs) are complex software components, which play an essential role in the usability of an application. The development of UIs therefore requires not only guidelines and best-practice reports, but also a development process that includes the elaboration of visual models and a standardized notation for their visualization.
The term user interface modeling is mostly used in an information technology context. A user interface model is a representation of how the end user(s) interact with a computer program or another device and also how the system responds. The modeling task is then to show all the "directly experienced aspects of a thing or device" [Trætteberg2002].
Modeling user interfaces is a well-established discipline in its own right. For example, modeling techniques can describe interaction objects, tasks, and lower-level dialogs in user interfaces. Using models as part of user interface development can help capture user requirements, avoid premature commitment to specific layouts and widgets, and make the relationships between an interface's different parts and their roles explicit. [SilvaPaton2003].
== Languages ==
=== MARIA ===
MARIA XML (Model-based lAnguage foR Interactive Applications) is a universal, declarative, multiple abstraction level, XML-based user interface markup language for modelling interactive applications in ubiquitous environments.
=== UML ===
Some aspects of user interface modeling can be realized using UML. However, the language is not mainly intended for this kind of modeling, which may make the resulting models somewhat artificial.
=== UMLi ===
UMLi is an extension of UML, and adds support for representation commonly occurring in user interfaces.
Because application models in UML describe few aspects of user interfaces, and because model-based user interface development environments (MB-UIDEs) lacked the ability to model applications, the University of Manchester started the UMLi research project in 1998. UMLi aims to address the problem of designing and implementing user interfaces by using a combination of UML and MB-UIDE techniques.
=== UsiXML ===
UsiXML (USer Interface eXtensible Markup Language) is an XML-based specification language for user interface design. It supports the description of UIs for multiple contexts of use, such as character user interfaces (CUIs), graphical user interfaces (GUIs), auditory user interfaces, and multimodal user interfaces.
=== DiaMODL ===
DiaMODL combines a dataflow-oriented language (Pisa interactor abstraction) with UML Statecharts which has focus on behavior. It is capable of modeling the dataflow as well as the behavior of interaction objects. It may be used for documenting the function and structure of concrete user interfaces.
=== Himalia ===
Himalia combines hypermedia models with the control/composite paradigm. It is a full user interface language: it may be used not only to specify a user interface but also to run it, so the designer tool can be categorized as a GUI builder.[1]
== Model types ==
The different aspects of a user interface require different model types. Some of the models that may be considered for UI modeling are:
Domain model, including data model (defines the objects that a user can view, access and manipulate through the user interface)
Navigation model (defines how the objects that a user views can be navigated through the user interface)
Task model (describes the tasks an end user performs and dictates what interaction capabilities must be designed)
User model (represents the different characteristics of end users and the roles they are playing within the organization)
Platform model (used to model the physical devices that are intended to host the application and how they interact with each other)
Dialogue model (how users can interact with the presentation objects (such as push buttons, commands, etc.) and with interaction media (such as voice input, touch screen, etc.), and the reactions that the user interface communicates via these objects)
Presentation model (application appearance, representation of the visual, haptic and auditory elements that the user interface offers to its users)
Application model (commands and data the application provides)
UML can be used for several of the models mentioned above with varying degrees of success, but it lacks support for user modeling, platform modeling and presentation modeling.
== Approaches ==
There exist several approaches to modeling a user interface.
=== Usage-centered design ===
In usage-centered design, the modeling task is to show the actual presentation of a planned system and how the user interaction is supposed to happen. This is probably the most praised approach, and it has been used successfully on a variety of small and large-scale projects. Its strengths lie in complex problems.
== Alternative approaches to model-based UIs ==
The known issues of model-based approaches include information restatement and a lack of mechanisms to effectively solve cross-cutting concerns [Cerny2013]. Model-based solutions can work well on their own, but integration with alternative approaches brings complexity in development and maintenance efforts.
=== Code-inspection based ===
These approaches are based on existing general-purpose language (GPL) code bases [Cerny2012]. They inspect the code through meta-programming and assemble a structural model that is transformed into the UI. This approach addresses information restatement, but it does not suit adaptive and context-aware UIs.
=== Generative programming ===
These approaches connect domain methods with GPL [Generative programming]. Cross-cutting concerns are addressed at compile-time, which does not directly accommodate future adaptive UIs needing runtime information.
=== Aspect-based UIs ===
The aspect-based solution suggested by [Cerny2013][Cerny2013a][AspectFaces] integrates the advantages of the code-inspection and generative-programming approaches. It inspects existing code and applies aspect-oriented methods to address cross-cutting concerns. It works at runtime, reduces information restatement, and at the same time separates UI concerns, which allows each concern to be reused independently of the others. In the study in [Cerny2013], the authors reduced UI code by 32% by applying the aspect-based UI approach to a production system. The main advantages are templating for adjusting the presentation, separate definitions of concerns, and mostly generic transformation rules applicable across various data.
=== Content models ===
Models of this kind show the contents of a user interface and its different components. Aesthetics and behavior details are not included in this kind of model as it is a form of usage-centered design model.
== See also ==
Cognitive ergonomics
== References ==
[Paternò 2005] – F Paternò, Model-based tools for pervasive usability, Interacting with Computers 17 (3), 291-315
[Trætteberg2002] – H. Trætteberg, Model-based User Interface Design, Doctoral thesis, Norwegian University of Science and Technology, 2002
[SilvaPaton2003] – P. Pinheiro da Silva, N. W. Paton, User Interface Modeling in UMLi, Stanford University / University of Manchester, 2003
[Markopoulos1997] – P. Markopoulos, A compositional model for the formal specification of user interface software, Doctoral thesis, Queen Mary and Westfield College University of London, 1997
[Trevisan2003] – D. Trevisan, J. Vanderdonck, B. Macq, Model-Based Approach and Augmented Reality Systems, Université catholique de Louvain, 1348 Louvain-la-Neuve, Belgium, 2003
[wwwUMLi] – The Unified Modeling Language for Interactive Applications
[Cerny2013] – Černý, T. - Čemus, K. - Donahoo, M.J. - Song, M.J.: Aspect-driven, Data-reflective and Context-aware User Interfaces Design. In: ACM SIGAPP Applied Computing Review [online], 2013, vol. 13, no. 4, p. 53-65, ISSN 1559-6915.
[Cerny2013a] – Černý, T. - Donahoo, M.J. - Song, E.: Towards Effective Adaptive User Interfaces Design, Proceedings of the 2013 Research in Applied Computation Symposium (RACS 2013), Montreal: ACM, 2013, ISBN 978-1-4503-2348-2.
[AspectFaces] – "AspectFaces". Coding Crayons s.r.o. Archived from the original on 2 Feb 2019.
[Cerny2012] – T. Cerny and E. Song. Model-driven Rich Form Generation. Information: An International Interdisciplinary Journal, 15(7, SI):2695–2714, JUL 2012.
[Generative programming] – Krzysztof Czarnecki and Ulrich W. Eisenecker. 2000. Generative Programming: Methods, Tools, and Applications. ACM Press/Addison-Wesley Publ. Co., New York, NY, USA. | Wikipedia/User_interface_modeling |
A graphical widget (also graphical control element or control) in a graphical user interface is an element of interaction, such as a button or a scroll bar. Controls are software components that a computer user interacts with through direct manipulation to read or edit information about an application. User interface libraries such as Windows Presentation Foundation, Qt, GTK, and Cocoa, contain a collection of controls and the logic to render these.
Each widget facilitates a specific type of user-computer interaction, and appears as a visible part of the application's GUI as defined by the theme and rendered by the rendering engine. The theme makes all widgets adhere to a unified aesthetic design and creates a sense of overall cohesion. Some widgets support interaction with the user, for example labels, buttons, and check boxes. Others act as containers that group the widgets added to them, for example windows, panels, and tabs.
Structuring a user interface with widget toolkits allows developers to reuse code for similar tasks, and provides users with a common language for interaction, maintaining consistency throughout the whole information system.
Graphical user interface builders facilitate the authoring of GUIs in a WYSIWYG manner employing a user interface markup language. They automatically generate all the source code for a widget from general descriptions provided by the developer, usually through direct manipulation.
== History ==
Around 1920, the word widget entered American English as a generic term for any useful device, particularly a product manufactured for sale; a gadget.
In 1988, the term widget is attested in the context of Project Athena and the X Window System. In An Overview of the X Toolkit by Joel McCormack and Paul Asente, it says:
The toolkit provides a library of user-interface components ("widgets") like text labels, scroll bars, command buttons, and menus; enables programmers to write new widgets; and provides the glue to assemble widgets into a complete user interface.
The same year, in the manual X Toolkit Widgets - C Language X Interface by Ralph R. Swick and Terry Weissman, it says:
In the X Toolkit, a widget is the combination of an X window or sub window and its associated input and output semantics.
Finally, still in the same year, Ralph R. Swick and Mark S. Ackerman explain where the term widget came from:
We chose this term since all other common terms were overloaded with inappropriate connotations. We offer the observation to the skeptical, however, that the principal realization of a widget is its associated X window and the common initial letter is not un-useful.
== Usage ==
Any widget displays an information arrangement changeable by the user, such as a window or a text box. The defining characteristic of a widget is to provide a single interaction point for the direct manipulation of a given kind of data. In other words, widgets are basic visual building blocks which, combined in an application, hold all the data processed by the application and the available interactions on this data.
GUI widgets are graphical elements used to build the human–machine interface of a program. GUI widgets are implemented as software components. Widget toolkits and software frameworks, such as GTK+ or Qt, contain them in software libraries so that programmers can use them to build GUIs for their programs.
A family of common reusable widgets has evolved for holding general information based on the Palo Alto Research Center Inc. research for the Xerox Alto User Interface. Various implementations of these generic widgets are often packaged together in widget toolkits, which programmers use to build graphical user interfaces (GUIs). Most operating systems include a set of ready-to-tailor widgets that a programmer can incorporate in an application, specifying how it is to behave. Each type of widget generally is defined as a class by object-oriented programming (OOP). Therefore, many widgets are derived through class inheritance.
In the context of an application, a widget may be enabled or disabled at a given point in time. An enabled widget has the capacity to respond to events, such as keystrokes or mouse actions. A widget that cannot respond to such events is considered disabled. The appearance of a widget typically differs depending on whether it is enabled or disabled; when disabled, a widget may be drawn in a lighter color ("grayed out") or be obscured visually in some way. See the adjacent image for an example.
The benefit of disabling unavailable controls rather than hiding them entirely is that users are shown that the control exists but is currently unavailable (with the implication that changing some other control may make it available), instead of possibly leaving the user uncertain about where to find the control at all. On pop-up dialogues, buttons might appear greyed out shortly after appearance to prevent accidental clicking or inadvertent double-tapping.
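The class-based structure and the enabled/disabled behavior described above can be sketched as a toy model (hypothetical names, not tied to any real toolkit):

```python
class Widget:
    """Generic base class: every widget has a position and an enabled flag."""
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y
        self.enabled = True

    def handle_event(self, event):
        if not self.enabled:
            return False          # disabled ("grayed out") widgets ignore input
        return self.on_event(event)

    def on_event(self, event):    # hook that subclasses override
        return False

class Button(Widget):
    """Derived widget: inherits position/enabled state, adds label and callback."""
    def __init__(self, label, on_click, **kw):
        super().__init__(**kw)
        self.label = label
        self.on_click = on_click

    def on_event(self, event):
        if event == "click":
            self.on_click()
            return True
        return False

clicks = []
b = Button("OK", lambda: clicks.append(1))
b.handle_event("click")      # enabled: the callback fires
b.enabled = False
b.handle_event("click")      # disabled: the event is ignored
print(len(clicks))           # prints 1
```

Real toolkits follow the same shape at larger scale: a common base class holds geometry, state, and event dispatch, while subclasses add widget-specific appearance and behavior.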
Widgets are sometimes qualified as virtual to distinguish them from their physical counterparts, e.g. virtual buttons that can be clicked with a pointer, vs. physical buttons that can be pressed with a finger (such as those on a computer mouse).
A related (but different) concept is the desktop widget, a small specialized GUI application that provides some visual information and/or easy access to frequently used functions such as clocks, calendars, news aggregators, calculators and desktop notes. These kinds of widgets are hosted by a widget engine.
== List of common generic widgets ==
=== Selection and display of collections ===
Button – control which can be clicked upon to perform an action. An equivalent to a push-button as found on mechanical or electronic instruments.
Radio button – control which can be clicked upon to select one option from a selection of options, similar to selecting a radio station from a group of buttons dedicated to radio tuning. Radio buttons always appear in pairs or larger groups, and only one option in the group can be selected at a time; selecting a new item from the group's buttons also de-selects the previously selected button.
Check box – control which can be clicked upon to enable or disable an option. Also called a tick box. The box indicates an "on" or "off" state via a check mark/tick ☑ or a cross ☒. Can be shown in an intermediate state (shaded or with a dash) to indicate that various objects in a multiple selection have different values for the property represented by the check box. Multiple check boxes in a group may be selected, in contrast with radio buttons.
Toggle switch – functionally similar to a check box. Can be toggled on and off, but unlike check boxes, this typically has an immediate effect.
Toggle button – functionally similar to a check box; works as a switch, though appears as a button. Can be toggled on and off.
Split button – control combining a button (typically invoking some default action) and a drop-down list with related, secondary actions
Cycle button – a button that cycles its content through two or more values, thus enabling selection of one from a group of items.
Slider – control with a handle that can be moved up and down (vertical slider) or right and left (horizontal slider) on a bar to select a value (or a range if two handles are present). The bar allows users to make adjustments to a value or process throughout a range of allowed values.
List box – a graphical control element that allows the user to select one or more items from a list contained within a static, multiple line text box.
Spinner – value input control which has small up and down buttons to step through a range of values
Drop-down list – A list of items from which to select. The list normally only displays items when a special button or indicator is clicked.
Menu – control with multiple actions which can be clicked upon to choose a selection to activate
Context menu – a type of menu whose contents depend on the context or state in effect when the menu is invoked
Pie menu – a circular context menu where selection depends on direction
Menu bar – a graphical control element which contains drop down menus
Toolbar – a graphical control element on which on-screen buttons, icons, menus, or other input or output elements are placed
Ribbon – a hybrid of menu and toolbar, displaying a large collection of commands in a visual layout through a tabbed interface.
Combo box (text box with attached menu or List box) – A combination of a single-line text box and a drop-down list or list box, allowing the user to either type a value directly into the control or choose from the list of existing options.
Icon – a quickly comprehensible symbol of a software tool, function, or a data file.
Tree view – a graphical control element that presents a hierarchical view of information
Grid view or datagrid – a spreadsheet-like tabular view of data that allows numbers or text to be entered in rows and columns.
=== Navigation ===
Link – Text with some kind of indicator (usually underlining and/or color) that indicates that clicking it will take one to another screen or page.
Tab – a graphical control element that allows multiple documents or panels to be contained within a single window
Scrollbar – a graphical control element by which continuous text, pictures, or any other content can be scrolled in a predetermined direction (up, down, left, or right)
=== Text/value input ===
Text box – (edit field) - a graphical control element intended to enable the user to input text
=== Output ===
Label – text used to describe another widget
Tooltip – informational window which appears when the mouse hovers over another control
Balloon help
Status bar – a graphical control element which poses an information area typically found at the window's bottom
Progress bar – a graphical control element used to visualize the progression of an extended computer operation, such as a download, file transfer, or installation
Infobar – a graphical control element used by many programs to display non-critical information to a user
=== Container ===
Window – a graphical control element consisting of a visual area containing some of the graphical user interface elements of the program it belongs to
Collapsible panel – a panel that can compactly store content which is hidden or revealed by clicking the tab of the widget.
Drawer – side sheets or surfaces containing supplementary content that may be anchored to, pulled out from, or pushed away beyond the left or right edge of the screen.
Accordion – a vertically stacked list of items, such as labels or thumbnails where each item can be "expanded" to reveal the associated content
Modal window – a graphical control element subordinate to an application's main window which creates a mode where the main window can not be used.
Dialog box – a small window that communicates information to the user and prompts for a response
Palette window – also known as "Utility window" - a graphical control element which floats on top of all regular windows and offers ready access tools, commands or information for the current application
Inspector window – a type of dialog window that shows a list of the current attributes of a selected object and allows these parameters to be changed on the fly
Frame – a type of box within which a collection of graphical control elements can be grouped as a way to show relationships visually
Canvas – generic drawing element for representing graphical information
Cover Flow – an animated, three-dimensional element for visually flipping through snapshots of documents, website bookmarks, album artwork, or photographs.
Bubble Flow – an animated, two-dimensional element that allows users to browse and interact with the entire tree view of a discussion thread.
Carousel (computing) – a graphical widget used to display visual cards in a way that's quick for users to browse, both on websites and on mobile apps
== See also ==
Graphical user interface elements
Geometric primitive
Widget engine for mostly unrelated, physically inspired "widgets"
Widget toolkit – a software library which contains a collection of widgets
Interaction technique
== References ==
== External links ==
Packaged Web Apps (Widgets) - Packaging and XML Configuration (Second Edition) - W3C Recommendation 27 November 2012
Widgets 1.0: The Widget Landscape (Q1 2008). W3C Working Draft 14 April 2008
Requirement For Standardizing Widgets. W3C Working Group Note 27 September 2011 | Wikipedia/Graphical_control_element |
Health systems science (HSS) is a foundational platform and framework for the study and understanding of how care is delivered, how health professionals work together to deliver that care, and how the health system can improve patient care and health care delivery. It is one of the three pillars of medical education along with the basic and clinical sciences. HSS includes the following core foundational domains: health care structure and process; health system improvement; value in health care; population, public, and social determinants of health; clinical informatics and health technology; and health care policy and economics. It also includes four functional domains: ethics and legal; change agency, management, and advocacy; teaming; and leadership. Systems thinking links all of these domains together. Patient, family, and community are at the center of HSS.
== History and development ==
HSS, which was originally referred to as systems-based practice, emerged in response to the growing recognition that effective health care delivery requires more than just clinical expertise. It acknowledges that health care systems are complex, adaptive systems influenced by a multitude of factors, including social determinants of health, policy decisions, organizational structures, and patient preferences.
The World Health Organization first recognized the need to educate physicians about the link between health and the systems in which people live, work, and play in 1978. The quality and patient safety movement of the 1980s and 1990s further reinforced the need for physicians to understand systems thinking. The Association of American Medical Colleges' Core Entrustable Professional Activities for Entering Residency (CEPAERs) started including identifying system failures and making contributions to a culture of safety and improvement in 1999. That year, the Accreditation Council for Graduate Medical Education also included systems-based practice as one of its six core competency domains. In 2001, the Health Resources and Services Administration funded an 18-medical-school consortium to launch several pilots related to systems-based education. In 2005, the book, Professionalism in Tomorrow's Healthcare System, outlined several aspects of the systems-based practice competency.
Medical schools and residency and fellowship programs, however, struggled to teach these competencies. The framework for HSS was developed to address this struggle and is built on a foundation of systems thinking and the biopsychosocial model developed by George L. Engel. It aims to educate physicians to become systems citizens.
From 2013 to 2015 the American Medical Association's (AMA) Accelerating Change in Medical Education Consortium of 11 U.S. medical schools worked to identify a comprehensive framework for HSS training. In 2017, a review of 30 grant submissions to the AMA Accelerating Change in Medical Education initiative and an analysis of the HSS-related curricula at the 11 medical schools that were members of the Accelerating Change in Medical Education Consortium formed the groundwork toward the development of a potential comprehensive HSS curricular framework with domains and subcategories.
Barriers to incorporating HSS into medical education include student resistance because it is not always viewed as essential to passing physician licensing and credentialing exams and limitations in the number of medical school faculty with expertise to teach HSS domains.
An increasing number of new medical schools have created their initial curriculum with HSS fully integrated including Kaiser Permanente Bernard J. Tyson School of Medicine, which matriculated its first class in July 2020, and the Alice L. Walton School of Medicine, which is expected to matriculate its first class in 2025.
== Future directions ==
As health care continues to evolve, the importance of HSS is expected to grow. Efforts to integrate HSS into medical education and practice will be essential for preparing physicians to navigate the complexities of modern health care delivery, advocate for their patients, and contribute to improving the health of populations.
HSS is also expanding to other health professions. In 2023, the National Academies of Sciences, Engineering, and Medicine hosted a series of workshops focused on integrating HSS across the learning curriculum. HSS has been expanding to physician assistants, nurses, and other health care professionals.
== Health systems science in Korea ==
The Korean Association of Medical Colleges has proposed replacing medical humanities with health systems science in that country's medical education system, although critics say that it needs adaptation to the Korean health system.
== Health systems science in South Africa ==
The American Medical Association collaborated with the University of Witwatersrand to customize health systems science for the South African health system.
== Health systems science in the United Kingdom ==
Health systems science is also referred to as clinical governance in the United Kingdom, although this does not include all the domains included in the American HSS framework.
== Notable figures and organizations ==
Susan Skochelak, MD, MPH, creator of the Accelerating Change in Medical Education initiative and lead editor of the first and second editions of the Health Systems Science textbook.
Schools involved in the AMA Accelerating Change in Medical Education initiative that helped create the HSS framework:
Warren Alpert Medical School of Brown University
Brody School of Medicine at East Carolina University
University of California, San Francisco, School of Medicine
University of California, Davis, School of Medicine
Indiana University School of Medicine
Mayo Clinic Alix School of Medicine
University of Michigan Medical School
New York University Grossman School of Medicine
Oregon Health & Science University School of Medicine
Penn State University College of Medicine
Vanderbilt University School of Medicine
== See also ==
Health systems engineering
Medical humanities
== References ==
Regulatory science is the scientific and technical foundation upon which regulations are based in various industries – particularly those involving health or safety. Regulatory bodies employing such principles in the United States include, for example, the FDA for food and medical products, the EPA for the environment, and OSHA for work safety.
"Regulatory science" is contrasted with regulatory affairs and regulatory law, which refer to the administrative or legal aspects of regulation, in that the former is focused on the regulations' scientific underpinnings and concerns – rather than the regulations' promulgation, implementation, compliance, or enforcement.
== History ==
Probably the first investigator to recognize the nature of regulatory science was Alvin Weinberg, who described the scientific process used to evaluate the effects of ionizing radiation as trans-science. The origin of the term regulatory science is unknown. It was probably coined sometime in the late 1970s in an undated memorandum prepared by A. Alan Moghissi, who was describing scientific issues that the newly formed US Environmental Protection Agency (EPA) was facing. During that period the EPA was forced to meet legally mandated deadlines to make decisions that required reliance upon science that did not meet conventional scientific requirements. At that time the prevailing view was that there was no need to establish a new scientific discipline because "science is science" regardless of its application. In the spring of 1985, Moghissi established the Institute for Regulatory Science in the Commonwealth of Virginia as a nonprofit organization with the objective of performing scientific studies "at the interface between science and the regulatory system". Moghissi et al. have provided an extensive description of the history of regulatory science, including various perceptions of it, leading to its acceptance by the FDA.
== Definition ==
Two federal regulatory agencies have provided definitions of regulatory science. According to the Food and Drug Administration (FDA): “Regulatory Science is the science of developing new tools, standards, and approaches to assess the safety, efficacy, quality, and performance of all FDA-regulated products”. According to the Environmental Protection Agency (EPA): “Regulatory science means scientific information including assessments, models, criteria documents, and regulatory impact analyses that provide the basis for significant regulatory decisions”.
Moghissi et al. have described the history of regulatory science and define it as:
“Regulatory science consists of an applied version of various scientific disciplines used in the regulatory process”. Based on their definition, the generalized FDA definition is: Regulatory science is the science of developing new tools, standards, and approaches derived from various scientific disciplines to assess the safety, efficacy, quality, and performance of all FDA-regulated products. Similarly, the generalized EPA definition is:
Regulatory science means scientific information including assessments, models, criteria documents, and regulatory impact analyses derived from various scientific disciplines that provide the basis for EPA final significant regulatory decisions.
There have been several attempts to define regulatory science. In many cases there are claims that there is a difference between regulatory science and “normal science”, “academic science”, “research science”, or compliance with regulations. The primary problem is a lack of appreciation that many branches of science are evolving and that much of this evolving science includes inherent uncertainties.
== Application of regulatory science ==
Regulatory science is included in every regulation that includes science. The regulatory science community consists of three groups of regulatory scientists:
Those who are involved in the development of regulations. Typically this group is employed by regulatory agencies.
Those who must comply with regulations. Typically this group consists of employees or contractors of the regulated community.
Those segments of the scientific community who perform research and development in areas relevant to the regulated community.
The third group is of particular significance, as it consists of organizations and individuals who support the first two groups. Included in this group are members of numerous advisory panels, organizations that provide peer reviews, and members of peer review panels. An example of this group is the National Academies, consisting of the National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council.
The application of regulatory science occurs in three phases. During the first phase the regulators must meet a legislative or court-mandated deadline and promulgate regulations using their best judgment. The second phase provides the opportunity to develop regulatory science tools. These include human health and ecological risk assessment procedures and post-marketing evaluation methods and processes for drugs and medical devices. During the third phase, tools developed during the second phase are used to improve the initial decision (Moghissi et al.).
== Regulatory engineering ==
Engineering is the development of new products and processes; hence regulatory engineering encompasses principally the development of products and processes to facilitate or better examine regulations or their scientific foundations. Another related segment of regulatory science deals with the application of engineering design or analysis to operations such as the safety of nuclear and other power plants, chemical production facilities, mining operations, and air transportation.
Sometimes the term "regulatory engineer(ing)" is misused to refer to essentially administrative or regulatory roles dealing with organizing or coordinating regulatory matters for an organization; however, "engineering" refers only to functional design of products and processes, and in many jurisdictions this definition is legally enforced (see Regulation and licensure in engineering).
== Areas of focus ==
=== Regulatory pharmaceutical medicine ===
Consistent with its mission, the Food and Drug Administration (FDA) suggests, “Regulatory science is the science of developing new tools, standards and approaches to assess the safety, efficacy, quality and performance of FDA-regulated products.”
Based on several decades of experience, regulatory science is logically defined as a distinct scientific discipline constituting the scientific foundation of regulatory, legislative, and judicial decisions. Much like many scientific disciplines that have evolved within the last several decades, regulatory science is both interdisciplinary and multidisciplinary and relies upon a large number of basic and applied scientific disciplines.
Regulatory science is an emerging area of interest within pharmaceutical medicine, concerned with the shaping and implementation of legislation and guidelines. One definition of “regulatory science” is the science of developing new tools, standards and approaches to evaluate the efficacy, safety, quality and performance of medical products in order to assess benefit-risk and facilitate sound and transparent regulatory decision-making. It has been recognized as having a significant impact on the industry’s ability to bring new medicines and medical devices to patients in need. Regulatory science challenges current concepts of benefit/risk assessment, submission and approval strategies, patient involvement and ethical aspects. It creates a platform for launching new ideas – not only by the pharmaceutical industry and regulatory authorities, but also by, for example, academics who want to contribute to better medical use of their research activities. Regulatory science has the potential to enable more efficient global development of medical products as well as more robust quality decision-making processes.
=== Human health ===
By far the predominant foci of regulatory science pertain to human health and well-being. This realm covers a broad range of scientific areas – including pollution and toxicology, work safety, food, drugs, and numerous others.
=== Ecology ===
Regulatory ecology covers the protection of various species, protection of wetlands, and numerous other regulated areas, including ecotoxicology.
For example, the US Clean Water Act is based upon an interest in protecting water quality for its own sake, in contrast with the Clean Air Act which is premised upon protecting air quality only for the sake of human health; however, these are ideological policy premises rather than scientific matters themselves.
The US Department of Agriculture regulates animal care, and the FDA regulates humaneness for animal studies.
The US Department of the Interior's Fish and Wildlife Service (USFWS) and the National Oceanic and Atmospheric Administration's National Marine Fisheries Service implement the development and enforcement of policies required by the Federal Endangered Species Act (FESA), Migratory Bird Treaty Act, and other biological resources laws. The FESA requires that decisions to list a species as endangered or threatened are based on the best available scientific data. To that end, the USFWS and other government agencies fund research to determine the conservation status of proposed species. Regulatory scientists within the Services review, evaluate, and incorporate data from these studies of proposed species in their published regulations. Survey protocols for listed species are also developed from scientific studies of their target species. The purpose of the protocols is to reliably and accurately determine the residency of the target species in a given study area.
== Regulatory economics ==
There are numerous economic decisions in the regulatory process, including the economics part of cost-benefit analysis.
== Science in legislation and in courts ==
Although often less than fully recognized, the scientific foundation of legislative decisions is included in regulatory science and should be based on reliable science. Similarly, courts have recognized the need to rely upon information that meets scientific requirements.
== References ==
== Further reading ==
Honda, Hiroshi (2016). "Overview of Issues and Discussions in Regulatory Science and Engineering over the Past Four Years in Global Arena" (PDF). American Journal of Environmental Engineering and Science. 3 (1): 1–20. ISSN 2381-1153.
A complex adaptive system (CAS) is a system that is complex in that it is a dynamic network of interactions, but the behavior of the ensemble may not be predictable according to the behavior of the components. It is adaptive in that the individual and collective behavior mutate and self-organize corresponding to the change-initiating micro-event or collection of events. It is a "complex macroscopic collection" of relatively "similar and partially connected micro-structures" formed in order to adapt to the changing environment and increase their survivability as a macro-structure. The Complex Adaptive Systems approach builds on replicator dynamics.
The study of complex adaptive systems, a subset of nonlinear dynamical systems, is an interdisciplinary matter that attempts to blend insights from the natural and social sciences to develop system-level models and insights that allow for heterogeneous agents, phase transition, and emergent behavior.
== Overview ==
The term complex adaptive systems, or complexity science, is often used to describe the loosely organized academic field that has grown up around the study of such systems. Complexity science is not a single theory—it encompasses more than one theoretical framework and is interdisciplinary, seeking the answers to some fundamental questions about living, adaptable, changeable systems. The study of complex adaptive systems may adopt hard or softer approaches. Hard theories use formal language that is precise, tend to see agents as having tangible properties, and usually see objects in a behavioral system that can be manipulated in some way. Softer theories use natural language and narratives that may be imprecise, and agents are subjects having both tangible and intangible properties. Examples of hard complexity theories include complex adaptive systems (CAS) and viability theory, and a class of softer theory is Viable System Theory. Many of the propositional considerations made in hard theory are also relevant to softer theory. From here on, interest centers on CAS.
The study of CAS focuses on complex, emergent and macroscopic properties of the system. John H. Holland said that CAS "are systems that have a large number of components, often called agents, that interact and adapt or learn."
Typical examples of complex adaptive systems include: climate; cities; firms; markets; governments; industries; ecosystems; social networks; power grids; animal swarms; traffic flows; social insect (e.g. ant) colonies; the brain and the immune system; and the cell and the developing embryo. Human social group-based endeavors, such as political parties, communities, geopolitical organizations, war, and terrorist networks, are also considered CAS. The internet and cyberspace, composed, collaborated on, and managed by a complex mix of human–computer interactions, is also regarded as a complex adaptive system. CAS can be hierarchical, but more often exhibit aspects of "self-organization".
The term complex adaptive system was coined in 1968 by sociologist Walter F. Buckley who proposed a model of cultural evolution which regards psychological and socio-cultural systems as analogous with biological species. In the modern context, complex adaptive system is sometimes linked to memetics, or proposed as a reformulation of memetics. Michael D. Cohen and Robert Axelrod however argue the approach is not social Darwinism or sociobiology because, even though the concepts of variation, interaction and selection can be applied to modelling 'populations of business strategies', for example, the detailed evolutionary mechanisms are often distinctly unbiological. As such, complex adaptive system is more similar to Richard Dawkins's idea of replicators.
=== General properties ===
What distinguishes a complex adaptive system (CAS) from a pure multi-agent system (MAS) is the focus on top-level properties and features like self-similarity, complexity, emergence and self-organization. Theorists define an MAS as a system composed of multiple interacting agents; whereas in CAS, the agents as well as the system are adaptive and the system is self-similar. A CAS is a complex, self-similar collectivity of interacting, adaptive agents. Complex adaptive systems feature a high degree of adaptive capacity, giving them resilience in the face of perturbation.
Other important properties include adaptation (or homeostasis), communication, cooperation, specialization, spatial and temporal organization, and reproduction. Such properties can manifest themselves on all levels: cells specialize, adapt and reproduce themselves just like larger organisms do. Communication and cooperation take place on all levels, from the agent- to the system-level. In some cases the forces driving co-operation between agents in such a system can be analyzed using game theory.
=== Characteristics ===
Some of the most important characteristics of complex adaptive systems are:
The number of elements is sufficiently large that conventional descriptions (e.g. a system of differential equations) are not only impractical, but cease to assist in understanding the system. Moreover, the elements interact dynamically, and the interactions can be physical or involve the exchange of information.
Such interactions are rich, i.e. any element or sub-system in the system is affected by and affects several other elements or sub-systems.
The interactions are non-linear: small changes in inputs, physical interactions or stimuli can cause large effects or very significant changes in outputs.
Interactions are primarily but not exclusively with immediate neighbours and the nature of the influence is modulated.
Any interaction can feed back onto itself directly or after a number of intervening stages. Such feedback can vary in quality. This is known as recurrency.
The overall behavior of the system of elements is not predicted by the behavior of the individual elements
Such systems may be open and it may be difficult or impossible to define system boundaries
Complex systems operate under far from equilibrium conditions. There has to be a constant flow of energy to maintain the organization of the system
Agents in the system are adaptive. They update their strategies in response to input from other agents, and the system itself.
Elements in the system may be ignorant of the behaviour of the system as a whole, responding only to the information or physical stimuli available to them locally
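The non-linearity characteristic listed above (small changes in inputs causing large changes in outputs) can be illustrated with the logistic map, a standard one-line non-linear update rule. This is an illustrative sketch, not drawn from the sources of this article:

```python
# Logistic map: x' = r * x * (1 - x) — a minimal non-linear update rule.
def trajectory(x, r=4.0, steps=40):
    """Iterate the map and record the successive states."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.200000)
b = trajectory(0.200001)  # input changed by one part in a million

# The two trajectories diverge dramatically: a tiny change in input
# produces a large change in output, as in complex-system interactions.
print(max(abs(x - y) for x, y in zip(a, b)))
```

A linear rule would keep the two trajectories within a millionth of each other forever; the non-linear rule amplifies the difference until it is of order one.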
Robert Axelrod & Michael D. Cohen identify a series of key terms from a modeling perspective:
Strategy, a conditional action pattern that indicates what to do in which circumstances
Artifact, a material resource that has definite location and can respond to the action of agents
Agent, a collection of properties, strategies & capabilities for interacting with artifacts & other agents
Population, a collection of agents, or, in some situations, collections of strategies
System, a larger collection, including one or more populations of agents and possibly also artifacts
Type, all the agents (or strategies) in a population that have some characteristic in common
Variety, the diversity of types within a population or system
Interaction pattern, the recurring regularities of contact among types within a system
Space (physical), location in geographical space & time of agents and artifacts
Space (conceptual), "location" in a set of categories structured so that "nearby" agents will tend to interact
Selection, processes that lead to an increase or decrease in the frequency of various types of agent or strategies
Success criteria or performance measures, a "score" used by an agent or designer in attributing credit in the selection of relatively successful (or unsuccessful) strategies or agents
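Axelrod and Cohen's vocabulary above can be made concrete in a toy agent-based sketch. The cooperation game and its payoff values below are illustrative assumptions, not part of their framework:

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    strategy: float  # a conditional action pattern: propensity to cooperate
    score: float = 0.0  # success criterion used for selection

def interact(a, b):
    # Interaction pattern: mutual cooperation pays 3, anything else pays 1
    # (hypothetical payoffs chosen for illustration).
    if random.random() < a.strategy and random.random() < b.strategy:
        a.score += 3; b.score += 3
    else:
        a.score += 1; b.score += 1

def select(population, keep=0.5):
    # Selection: strategies of relatively successful agents rise in frequency.
    survivors = sorted(population, key=lambda ag: ag.score, reverse=True)
    survivors = survivors[: int(len(population) * keep)]
    return [Agent(random.choice(survivors).strategy) for _ in population]

random.seed(0)
population = [Agent(random.random()) for _ in range(100)]  # variety of types
for generation in range(50):
    for _ in range(500):
        a, b = random.sample(population, 2)
        interact(a, b)
    population = select(population)

# Cooperative strategies spread because mutual cooperation scores higher.
print(sum(ag.strategy for ag in population) / len(population))
```

The population, strategy, interaction pattern, success criterion, and selection terms of the list each appear as a named piece of the sketch.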
Turner and Baker synthesized the characteristics of complex adaptive systems from the literature and tested these characteristics in the context of creativity and innovation. Each of these eight characteristics had been shown to be present in the creativity and innovative processes:
Path dependent: Systems tend to be sensitive to their initial conditions. The same force might affect systems differently.
Systems have a history: The future behavior of a system depends on its initial starting point and subsequent history.
Non-linearity: React disproportionately to environmental perturbations. Outcomes differ from those of simple systems.
Emergence: Each system's internal dynamics affect its ability to change in a manner that might be quite different from other systems.
Irreducible: Irreversible process transformations cannot be reduced back to its original state.
Adaptive/Adaptability: Systems that are simultaneously ordered and disordered are more adaptable and resilient.
Operates between order and chaos: Adaptive tension emerges from the energy differential between the system and its environment.
Self-organizing: Systems are composed of interdependency, interactions of its parts, and diversity in the system.
== Adaptation mechanisms ==
The organisation of a complex adaptive system relies on the use of internal models, mental models or schemata guiding the behavior of the system. Three levels of adaptation of a system can be distinguished:
Using a schema to react to changing circumstances in the environment.
Changing a schema when the existing one does not lead to satisfactory outcomes.
Selecting systems that use successful schemata from among a population (survival of the fittest).
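These three levels can be sketched in a toy example. The environment, conditions, and actions below are hypothetical illustrations, not taken from the literature:

```python
import random

random.seed(1)

# Toy environment (an assumption for illustration): the satisfactory
# response to "hot" is "cool" and to "cold" is "heat".
CORRECT = {"hot": "cool", "cold": "heat"}
ACTIONS = ["cool", "heat"]

def reward(condition, action):
    return 1 if CORRECT[condition] == action else 0

# Level 1: use a schema to react to changing circumstances.
def act(schema, condition):
    return schema[condition]

# Level 2: change the schema when it does not lead to satisfactory outcomes.
def adapt(schema, trials=100):
    for _ in range(trials):
        condition = random.choice(list(CORRECT))
        if reward(condition, act(schema, condition)) == 0:
            schema[condition] = random.choice(ACTIONS)  # try something else
    return schema

# Level 3: select the system with the most successful schema in a population.
def fitness(schema):
    return sum(reward(c, schema[c]) for c in CORRECT)

population = [{c: random.choice(ACTIONS) for c in CORRECT} for _ in range(10)]
population = [adapt(s) for s in population]
best = max(population, key=fitness)
print(best, fitness(best))
```

Level 1 applies a fixed schema, level 2 revises it in response to unsatisfactory outcomes, and level 3 compares whole systems and keeps the fittest.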
== Modeling and simulation ==
CAS are occasionally modeled by means of agent-based models and complex network-based models. Agent-based models are developed by means of various methods and tools primarily by means of first identifying the different agents inside the model. Another method of developing models for CAS involves developing complex network models by means of using interaction data of various CAS components.
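A minimal agent-based sketch of such a model, using only the standard library; the network size, update rule, and parameters are arbitrary illustrative choices:

```python
import random

random.seed(42)

# Build a random interaction network of 50 agents (a toy complex network).
n = 50
neighbors = {i: set() for i in range(n)}
for i in range(n):
    for j in random.sample(range(n), 4):
        if j != i:
            neighbors[i].add(j)
            neighbors[j].add(i)

# Each agent holds a binary state and repeatedly adopts its local majority,
# a simple rule applied with purely local information.
state = {i: random.choice([0, 1]) for i in range(n)}
for _ in range(20):
    new = {}
    for i in range(n):
        votes = sum(state[j] for j in neighbors[i])
        if votes * 2 > len(neighbors[i]):
            new[i] = 1
        elif votes * 2 < len(neighbors[i]):
            new[i] = 0
        else:
            new[i] = state[i]  # tie: keep the current state
    state = new

# Large-scale patterns of agreement arise from local interactions alone —
# macroscopic behavior not dictated by any individual agent.
print(sum(state.values()) / n)
```

Identifying the agents, their states, and their interaction network is exactly the first step described above for building an agent-based model.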
In 2013 SpringerOpen/BioMed Central launched an online open-access journal on the topic of complex adaptive systems modeling (CASM). Publication of the journal ceased in 2020.
== Evolution of complexity ==
Living organisms are complex adaptive systems. Although complexity is hard to quantify in biology, evolution has produced some remarkably complex organisms. This observation has led to the common misconception of evolution being progressive and leading towards what are viewed as "higher organisms".
If this were generally true, evolution would possess an active trend towards complexity. In such a process, the value of the most common level of complexity would increase over time. Indeed, some artificial life simulations have suggested that the generation of CAS is an inescapable feature of evolution.
However, the idea of a general trend towards complexity in evolution can also be explained through a passive process. This involves an increase in variance but the most common value, the mode, does not change. Thus, the maximum level of complexity increases over time, but only as an indirect product of there being more organisms in total. This type of random process is also called a bounded random walk.
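This passive process can be illustrated with a small simulation; the numbers of lineages and steps and the unit step size are arbitrary assumptions made for the sketch:

```python
import random
from collections import Counter

random.seed(0)

# Each lineage's "complexity" performs a random walk with a reflecting
# lower bound — there is no built-in drive toward higher complexity.
n_lineages, steps, lower_bound = 2000, 200, 1
complexity = [lower_bound] * n_lineages
for _ in range(steps):
    for i in range(n_lineages):
        complexity[i] = max(lower_bound, complexity[i] + random.choice([-1, 1]))

mode = Counter(complexity).most_common(1)[0][0]
# The most common value (the mode) stays near the lower bound, while the
# maximum grows over time — the signature of a passive trend.
print("mode:", mode, "max:", max(complexity))
```

The maximum complexity rises only because variance increases against a fixed floor, mirroring the bounded-random-walk argument in the paragraph above.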
In this hypothesis, the apparent trend towards more complex organisms is an illusion resulting from concentrating on the small number of large, very complex organisms that inhabit the right-hand tail of the complexity distribution and ignoring simpler and much more common organisms. This passive model emphasizes that the overwhelming majority of species are microscopic prokaryotes, which comprise about half the world's biomass and constitute the vast majority of Earth's biodiversity. Therefore, simple life remains dominant on Earth, and complex life appears more diverse only because of sampling bias.
If there is a lack of an overall trend towards complexity in biology, this would not preclude the existence of forces driving systems towards complexity in a subset of cases. These minor trends would be balanced by other evolutionary pressures that drive systems towards less complex states.
== See also ==
== References ==
== Literature ==
== External links ==
Complex Adaptive Systems Group loosely coupled group of scientists and software engineers interested in complex adaptive systems
DNA Wales Research Group Current Research in Organisational change CAS/CES related news and free research data. Also linked to the Business Doctor & BBC documentary series
A description of complex adaptive systems on the Principia Cybernetica Web.
Quick reference single-page description of the 'world' of complexity and related ideas hosted by the Center for the Study of Complex Systems at the University of Michigan.
Complex systems research network
The Open Agent-Based Modeling Consortium
TEDxRotterdam – Igor Nikolic – Complex adaptive systems, and The emergence of universal consciousness: Brendan Hughes at TEDxPretoria. Talks discussing various practical examples of complex adaptive systems, including Wikipedia, star galaxies, genetic mutation, and other examples
Theory U is a change management method and the title of a book by Otto Scharmer. Scharmer with colleagues at MIT conducted 150 interviews with entrepreneurs and innovators in science, business, and society and then extended the basic principles into a theory of learning and management, which he calls Theory U. The principles of Theory U are suggested to help political leaders, civil servants, and managers break through past unproductive patterns of behavior that prevent them from empathizing with their clients' perspectives and often lock them into ineffective patterns of decision-making.
== Some notes about theory U ==
=== Fields of attention ===
Thinking (individual)
Conversing (group)
Structuring (institutions)
Ecosystem coordination (global systems)
=== Presencing ===
The author of the theory U concept expresses it as a process or journey, which is also described as Presencing, as indicated in the diagram (for which there are numerous variants).
At the core of the "U" theory is presencing: sensing + presence. According to The Learning Exchange, Presencing is a journey with five movements:
We move down one side of the U (connecting us to the world that is outside of our institutional bubble) to the bottom of the U (connecting us to the world that emerges from within) and up the other side of the U (bringing forth the new into the world).
On that journey, at the bottom of the U, lies an inner gate that requires us to drop everything that isn't essential. This process of letting-go (of our old ego and self) and letting-come (our highest future possibility: our Self) establishes a subtle connection to a deeper source of knowing. The essence of presencing is that these two selves – our current self and our best future self – meet at the bottom of the U and begin to listen and resonate with each other. Once a group crosses this threshold, nothing remains the same. Individual members and the group as a whole begin to operate with a heightened level of energy and sense of future possibility. Often they then begin to function as an intentional vehicle for an emerging future.
The core elements are shown below.
"Moving down the left side of the U is about opening up and dealing with the resistance of thought, emotion, and will; moving up the right side is about intentionally reintegrating the intelligence of the head, the heart, and the hand in the context of practical applications".
=== Leadership capacities ===
According to Scharmer, a value created by journeying through the "U" is to develop seven essential leadership capacities:
Holding the space: listen to what life calls you to do (listen to oneself, to others and make sure that there is space where people can talk)
Observing: Attend with your mind wide open (observe without your voice of judgment, effectively suspending past cognitive schema)
Sensing: Connect with your heart and facilitate the opening process (i.e. see things as interconnected wholes)
Presencing: Connect to the deepest source of your self and will and act from the emerging whole
Crystallizing: Access the power of intention (ensure a small group of key people commits itself to the purpose and outcomes of the project)
Prototyping: Integrating head, heart, and hand (one should act and learn by doing, avoiding the paralysis of inaction, reactive action, over-analysis, etc.)
Performing: Playing the "macro violin" (i.e. find the right leaders, find appropriate social technology to get a multi-stakeholder project going).
The sources of Theory U include interviews with 150 innovators and thought leaders on management and change. The work of Brian Arthur, Francisco Varela, Peter Senge, Ed Schein, Joseph Jaworski, Arawana Hayashi, Eleanor Rosch, Friedrich Glasl, Martin Buber, Rudolf Steiner and Johann Wolfgang von Goethe has been particularly influential. Artists were represented in the project from 2001 to 2010 by Andrew Campbell, whose work was given a separate index page linked to the original project site. https://web.archive.org/web/20050404033150/http://www.dialogonleadership.org/indexPaintings.html
Today, Theory U constitutes a body of leadership and management praxis drawing from a variety of sources and more than 20 years of elaboration by Scharmer and colleagues. Theory U is translated into 20 languages and is used in change processes worldwide.
Meditation teacher Arawana Hayashi has explained how she considers Theory U relevant to "the feminine principle".
== Earlier work: U-procedure ==
The earlier work by Glasl involved a sociotechnical, Goethean and anthroposophical process involving a few or many co-workers, managers and/or policymakers. It proceeded from a phenomenological diagnosis of the present state of the organisation to plans for the future. The process was described in a U formation consisting of three levels (technical and instrumental subsystem, social subsystem and cultural subsystem) and seven stages, beginning with the observation of organisational phenomena, workflows, resources etc., and concluding with specific decisions about desired future processes and phenomena. The method draws on the Goethean techniques described by Rudolf Steiner, transforming observations into intuitions and judgements about the present state of the organisation and decisions about the future. The three levels represent explicitly recursive reappraisals at progressively advanced levels of reflective, creative and intuitive insight (epistemologies), thereby enabling more radically systemic intervention and redesign. The stages are: phenomena – picture (a qualitative metaphoric visual representation) – idea (the organising idea or formative principle) – and judgement (does this fit?). The first three are then reflexively replaced by better alternatives (new idea --> new image --> new phenomena) to form the new design. Glasl published the method in Dutch (1975), German (1975, 1994) and English (1997).
The seven stages are shown below.
In contrast to that earlier work on the U procedure, which assumes a set of three subsystems in the organization that need to be analyzed in a specific sequence, Theory U starts from a different epistemological view that is grounded in Varela's approach to neurophenomenology. It focuses on the process of becoming aware and applies to all levels of systems change. Theory U contributed to advancing organizational learning and systems thinking tools towards an awareness-based view of systems change that blends systems thinking with systems sensing. On the left-hand side of the U, the process goes through the three main "gestures" of becoming aware that Francisco Varela spelled out in his work (suspension, redirection, letting-go). On the right-hand side of the U, this process extends towards actualizing the future that is wanting to emerge (letting come, enacting, embodying).
== Criticism ==
Sociologist Stefan Kühl criticizes Theory U as a management fashion on three main points. First, while Theory U claims to create change at all levels, including the level of the individual "self" and the institutional level, its case studies mainly focus on clarifying the positions of individuals in groups or teams. Apart from the idea of participating in online courses on Theory U, the theory remains silent on how broad organisational or societal changes might take place. Second, Theory U, like many management fashions, neglects structural conflicts of interest, for instance between groups, organisations and classes. While it makes sense for top management to emphasize common values, visions and the community of all staff externally, Kühl considers it problematic if organisations internally believe too strongly in this community, as this may prevent the articulation of conflicting interests and therefore organisational learning processes. Finally, the five-phase model of Theory U, like other cyclical (if less esoteric) management models such as PDCA, is a gross simplification of decision-making processes in organisations, which are often wilder, less structured and more complex. Kühl argues that Theory U may be useful insofar as it allows management to make decisions despite uncertain knowledge and encourages change, but he expects that Theory U will lose its glamour.
== See also ==
Appreciative inquiry
Art of Hosting
Decision cycle
Learning cycle
OODA loop
V-Model
== References ==
== External links ==
C. Otto Scharmer Home Page
Presencing Home Page
The U-Process for Discovery
Validation is the process of establishing documentary evidence demonstrating that a procedure, process, or activity carried out in testing and then production maintains the desired level of compliance at all stages. In the pharmaceutical industry, it is very important that, in addition to final testing and compliance of products, it is also assured that the process will consistently produce the expected results. The desired results are established in terms of specifications for the outcome of the process. Qualification of systems and equipment is therefore a part of the process of validation. Validation is a requirement of food, drug and pharmaceutical regulating agencies such as the US FDA and their good manufacturing practice guidelines. Since a wide variety of procedures, processes, and activities need to be validated, the field of validation is divided into a number of subsections, including the following:
Equipment validation
Facilities validation
HVAC system validation
Cleaning validation
Process Validation
Analytical method validation
Computer system validation
Similarly, the activity of qualifying systems and equipment is divided into a number of subsections including the following:
Design qualification (DQ)
Component qualification (CQ)
Installation qualification (IQ)
Operational qualification (OQ)
Performance qualification (PQ)
== History ==
The concept of validation was first proposed by two Food and Drug Administration (FDA) officials, Ted Byers and Bud Loftus, in 1979 in the United States, to improve the quality of pharmaceuticals. It was proposed in direct response to several problems in the sterility of the large-volume parenteral market. The first validation activities focused on the processes involved in making these products, but quickly spread to associated processes including environmental control, media fills, equipment sanitization and purified-water production.
The concept of validation was first developed for equipment and processes, and derived from the engineering practices used in the delivery of large pieces of equipment that would be manufactured, tested, delivered and accepted according to a contract.
The use of validation spread to other areas of industry after several large-scale problems highlighted the potential risks in the design of products. The most notable is the Therac-25 incident. Here, the software for a large radiotherapy device was poorly designed and tested. In use, several interconnected problems led to several devices giving doses of radiation several thousands of times higher than intended, which resulted in the death of three patients and several more being permanently injured.
In 2005 an individual wrote a standard by which the transportation process could be validated for cold chain products. This standard was written for a biological manufacturing company and was then incorporated into the PDA's Technical Report #39, thus establishing the industry standard for cold chain validation. This was critical for the industry due to the sensitivity of drug substances, biologics and vaccines to various temperature conditions. The FDA has also been very focused on this final area of distribution and the potential for a drug substance's quality to be impacted by extreme temperature exposure.
Typical analytical method validation parameters include:
Accuracy – the closeness of test results obtained by an analytical procedure to the true value. The accuracy of an analytical procedure shall be established across its range.
Precision – the closeness of agreement between a series of measurements obtained from multiple sampling of the same homogeneous sample under the prescribed conditions.
Method precision (repeatability) – precision determined on different test preparations of a homogeneous sample within a short interval of time under the same experimental conditions.
Intermediate precision (ruggedness) – within-laboratory variations, i.e. different days, different analysts, different equipment, etc.
Range – the interval between the upper and lower concentrations of analyte in the sample for which it has been demonstrated that the analytical procedure has a suitable level of precision, accuracy and linearity.
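The accuracy and precision parameters above reduce to simple statistics over replicate measurements. As a minimal sketch (the function names and the replicate data are illustrative, not taken from any pharmacopoeial text), accuracy can be expressed as mean recovery against the true value and precision as relative standard deviation:

```python
# Illustrative sketch: accuracy and precision statistics for
# replicate assay measurements (hypothetical data).
from statistics import mean, stdev

def accuracy_percent(measurements, true_value):
    """Accuracy expressed as mean recovery relative to the true value."""
    return 100.0 * mean(measurements) / true_value

def rsd_percent(measurements):
    """Precision expressed as relative standard deviation (%RSD)."""
    return 100.0 * stdev(measurements) / mean(measurements)

# Six replicate results, as % of label claim (hypothetical values)
replicates = [99.1, 100.4, 99.8, 100.2, 99.6, 100.1]
print(round(accuracy_percent(replicates, 100.0), 2))
print(round(rsd_percent(replicates), 2))
```

In practice an acceptance criterion (for example, recovery within 98–102% and %RSD below 2%) would be pre-defined in the validation protocol rather than judged after the fact.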
== Reasons for validation ==
The FDA, like other food and drug regulatory agencies around the globe, not only asks for a product that meets its specification, but also requires that the process, procedures, intermediate stages of inspection, and testing adopted during manufacturing be designed so that, when followed, they consistently produce similar, reproducible, desired results that meet the quality standard of the product being manufactured and comply with regulatory and security requirements. Such procedures are developed through the process of validation. This is to maintain and assure a higher degree of quality of food and drug products.
"Process validation is defined as the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product. Process validation involves a series of activities taking place over the lifecycle of the product and process." A properly designed system will provide a high degree of assurance that every step, process, and change has been properly evaluated before its implementation. Testing a sample of a final product is not considered sufficient evidence that every product within a batch meets the required specification.
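A simple binomial sketch, not drawn from any regulatory text, illustrates why sampling of the final product alone is weak evidence: if a batch has defect rate p, a random sample of n units contains no defectives with probability (1 − p)^n, which stays high even for meaningful defect rates.

```python
# Illustrative calculation: probability that acceptance sampling
# sees zero defective units despite a real defect rate in the batch.
def prob_all_pass(defect_rate, sample_size):
    """Chance a random sample contains zero defectives (binomial model)."""
    return (1.0 - defect_rate) ** sample_size

# A 1% defect rate with a 20-unit sample goes undetected roughly 82% of the time.
print(round(prob_all_pass(0.01, 20), 3))
```

This is why the regulations demand a validated process rather than relying on end-of-line testing alone.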
== Validation Master Plan ==
The Validation Master Plan is a document that describes how and when the validation program will be executed in a facility. Even though it is not mandatory, it is the document that outlines the principles involved in the qualification of a facility, defines the areas and systems to be validated and provides a written program for achieving and maintaining a qualified facility with validated processes. It is the foundation for the validation program and should include process validation, facility and utility qualification and validation, equipment qualification, cleaning and computer validation. The regulations also set out an expectation that the different parts of the production process are well defined and controlled, such that the results of that production will not substantially change over time.
== The validation process ==
The validation scope, boundaries and responsibilities for each process or group of similar processes or similar equipment must be documented and approved in a validation plan (VP). These documents, terms and references serve the protocol authors in setting the scope of their protocols. The plan must be based on a validation risk assessment (VRA) to ensure that the scope of validation being authorised is appropriate for the complexity and importance of the equipment or process under validation. Within the references given in the VP, the protocol authors must ensure that all aspects of the process or equipment under qualification that may affect the efficacy, quality and/or records of the product are properly qualified. Qualification includes the following steps:
Design qualification (DQ)- Demonstrates that the proposed design (or the existing design for an off-the-shelf item) will satisfy all the requirements that are defined and detailed in the User Requirements Specification (URS). Satisfactory execution of the DQ is a mandatory requirement before construction (or procurement) of the new design can be authorised.
Installation qualification (IQ) – Demonstrates that the process or equipment meets all specifications, is installed correctly, and all required components and documentation needed for continued operation are installed and in place.
Operational qualification (OQ) – Demonstrates that all facets of the process or equipment are operating correctly.
Performance qualification (PQ) – Demonstrates that the process or equipment performs as intended in a consistent manner over time.
Component qualification (CQ) – a relatively new term developed in 2005. It refers to the manufacturing of auxiliary components to ensure that they are manufactured to the correct design criteria. This could include packaging components such as folding cartons, shipping cases, labels or even phase-change material. All of these components must undergo some type of random inspection to ensure that the third-party manufacturer's process consistently produces components fit for use at drug or biologic manufacturers operating under GMP.
There are instances when it is more expedient and efficient to transfer some tests or inspections from the IQ to the OQ, or from the OQ to the PQ. This is allowed for in the regulations, provided that a clear and approved justification is documented in the Validation Plan (VP).
This combined testing of the OQ and PQ phases is sanctioned by the European Commission Enterprise Directorate-General within ‘Annex 15 to the EU Guide to Good Manufacturing Practice’ (2001, p. 6), which states that:
"Although PQ is described as a separate activity, it may in some cases be appropriate to perform it in conjunction with OQ."
== Computer System Validation ==
This requirement has naturally expanded to encompass computer systems used both in the development and production of pharmaceutical products, medical devices, food, blood establishments, tissue establishments, and clinical trials, and as part of those products themselves. In 1983 the FDA published a guide to the inspection of Computerized Systems in Pharmaceutical Processing, also known as the 'bluebook'. Both the American FDA and the UK Medicines and Healthcare products Regulatory Agency have since added sections to their regulations specifically covering the use of computer systems. In the UK, computer validation is covered in Annex 11 of the EU GMP regulations (EMEA 2011). The FDA introduced 21 CFR Part 11 for rules on the use of electronic records and electronic signatures (FDA 1997).
The FDA regulation is harmonized with ISO 8402:1994, which treats "verification" and "validation" as separate and distinct terms. On the other hand, many software engineering journal articles and textbooks use the terms "verification" and "validation" interchangeably, or in some cases refer to software "verification, validation, and testing (VV&T)" as if it is a single concept, with no distinction among the three terms.
The General Principles of Software Validation (FDA 2002) defines verification as
"Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase."
It also defines Validation as
"Confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled." The software validation guideline states: "The software development process should be sufficiently well planned, controlled, and documented to detect and correct unexpected results from software changes." Annex 11 states: "The validation documentation and reports should cover the relevant steps of the life cycle."
Weichel (2004) found that over twenty warning letters issued by the FDA to pharmaceutical companies between 1997 and 2001 specifically cited problems in computer system validation.
Probably the best known industry guidance available is the GAMP Guide, now in its fifth edition and known as GAMP5 published by ISPE (2008). This guidance gives practical advice on how to satisfy regulatory requirements.
=== Scope of Computer Validation ===
The definition of validation above discusses production of evidence that a system will meet its specification. This definition does not refer to a computer application or a computer system but to a process. The main implications in this are that validation should cover all aspects of the process including the application, any hardware that the application uses, any interfaces to other systems, the users, training and documentation as well as the management of the system and the validation itself after the system is put into use. The PIC/S guideline (PIC/S 2004) defines this as a 'computer related system'.
Much effort is expended within the industry upon validation activities, and several journals are dedicated to both the process and methodology around validation, and the science behind it.
=== Risk Based Approach To Computer Validation ===
In recent years, a risk-based approach has been adopted within the industry, in which the testing of computer systems (with an emphasis on finding problems) is wide-ranging and documented but not heavily evidenced (i.e., hundreds of screen prints are not gathered during testing). Annex 11 states: "Risk management should be applied throughout the lifecycle of the computerised system taking into account patient safety, data integrity and product quality. As part of a risk management system, decisions on the extent of validation and data integrity controls should be based on a justified and documented risk assessment of the computerised system."
The subsequent validation or verification of computer systems targets only the "GxP critical" requirements of computer systems. Evidence (e.g. screen prints) is gathered to document the validation exercise. In this way it is assured that systems are thoroughly tested, and that validation and documentation of the "GxP critical" aspects is performed in a risk-based manner, optimizing effort and ensuring that computer system's fitness for purpose is demonstrated.
The overall risk posed by a computer system is now generally considered to be a function of its complexity, its patient/product impact, and its pedigree (configurable off-the-shelf versus custom-written for a specific purpose). A lower-risk system merits a less in-depth specification/testing/validation approach (for example, the documentation surrounding a spreadsheet containing a simple but "GxP"-critical calculation should not match that of a chromatography data system with 20 instruments).
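As a hypothetical sketch of such a risk-based decision, the three factors above can be combined into a score that drives validation depth. The factor scale, the multiplicative scoring and the depth thresholds here are invented for illustration; they do not come from GAMP or any regulation:

```python
# Hypothetical risk-scoring sketch for deciding validation depth.
# Factor names, weights and thresholds are illustrative only.
def risk_score(complexity, gxp_impact, pedigree):
    """Each factor scored 1 (low) to 3 (high); higher total = deeper validation."""
    return complexity * gxp_impact * pedigree

def validation_depth(score):
    if score <= 4:
        return "basic verification"
    if score <= 12:
        return "standard qualification (IQ/OQ)"
    return "full qualification (IQ/OQ/PQ) with detailed evidence"

# Simple GxP-critical spreadsheet: low complexity, high impact, off-the-shelf.
print(validation_depth(risk_score(1, 3, 1)))
# Custom chromatography data system: high on all three factors.
print(validation_depth(risk_score(3, 3, 3)))
```

The point of any such scheme is that the justification for the chosen depth is documented up front, as Annex 11 requires, rather than decided ad hoc during testing.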
Determination of a "GxP critical" requirement for a computer system is subjective, and the definition needs to be tailored to the organisation involved. In general, however, a "GxP" requirement may be considered to be one that leads to the development or configuration of a computer function which has a direct impact on patient safety or on the pharmaceutical product being processed, or which has been developed or configured to meet a regulatory requirement. In addition, if a function has a direct impact on GxP data (its security or integrity), it may be considered "GxP critical".
== Product life cycle approach in validation ==
Validation efforts must account for the complete product life cycle, including the developmental procedures adapted for qualification of a drug product commencing with its research and development phase, the rationale for adopting a best-fit formula representing the relationship between required outputs and specified inputs, and the procedure for manufacturing. Each step must be justified and monitored in order to provide a good-quality food or drug product. The FDA also emphasizes the product life cycle approach in its evaluation of manufacturers' regulatory compliance.
== See also ==
Good Automated Manufacturing Practice (GAMP)
Verification and Validation
Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme
Regulation of therapeutic goods
United States Pharmacopeia
== References ==
Bibliography
Guidance for Industry. Process Validation: General Principles and Practices. U.S. Department of Health and Human Services Food and Drug Administration. January 2011.
In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.
== Overview ==
A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize.
SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations.
In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".
SDLC is not a methodology per se, but rather a description of the phases that a methodology should address. The list of phases is not definitive, but typically includes planning, analysis, design, build, test, implement, and maintenance/support. In the Scrum framework, for example, one could say a single user story goes through all the phases of the SDLC within a two-week sprint. By contrast, in the waterfall methodology every business requirement is translated into feature/functional descriptions, which are then all implemented, typically over a period of months or longer.
== History ==
According to Elliott (2004), SDLC "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".
The structured systems analysis and design method (SSADM) was produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".
== Models ==
SDLC provides a set of phases/steps/activities for system designers and developers to follow. Each phase builds on the results of the previous one. Not every project requires that the phases be sequential. For smaller, simpler projects, phases may be combined/overlap.
=== Waterfall ===
The oldest and best known is the waterfall model, which uses a linear sequence of steps. Waterfall has different varieties. One variety is as follows:
==== Preliminary analysis ====
Conduct a preliminary analysis, consider alternative solutions, estimate costs and benefits, and submit a preliminary plan with recommendations.
Conduct preliminary analysis: Identify the organization's objectives and define the nature and scope of the project. Ensure that the project fits with the objectives.
Consider alternative solutions: Alternatives may come from interviewing employees, clients, suppliers, and consultants, as well as competitive analysis.
Cost-benefit analysis: Analyze the costs and benefits of the project.
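The cost-benefit step above is often a discounted cash-flow comparison. A minimal sketch, assuming a hypothetical project with an upfront cost and recurring yearly benefits (the figures and discount rate are invented for illustration):

```python
# Illustrative cost-benefit sketch for the preliminary-analysis step:
# compare discounted benefits of a proposed system against its costs.
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical project: 100k upfront cost, 40k yearly benefit for 4 years,
# discounted at 8%. A positive NPV supports proceeding with the project.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]
print(round(npv(flows, 0.08), 2))
```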
==== Systems analysis, requirements definition ====
Decompose project goals into defined functions and operations. This involves gathering and interpreting facts, diagnosing problems, and recommending changes. Analyze end-user information needs and resolve inconsistencies and incompleteness:
Collect facts: Obtain end-user requirements by document review, client interviews, observation, and questionnaires.
Scrutinize existing system(s): Identify pros and cons.
Analyze the proposed system: Find solutions to issues and prepare specifications, incorporating appropriate user proposals.
==== Systems design ====
At this step, desired features and operations are detailed, including screen layouts, business rules, process diagrams, pseudocode, and other deliverables.
==== Development ====
Write the code.
==== Integration and testing ====
Assemble the modules in a testing environment. Check for errors, bugs, and interoperability.
==== Acceptance, installation, deployment ====
Put the system into production. This may involve training users, deploying hardware, and loading information from the prior system.
==== Maintenance ====
Monitor the system to assess its ongoing fitness, and make modest changes and fixes as needed to maintain the quality of the system. Continual monitoring and updates ensure the system remains effective and high-quality.
==== Evaluation ====
The system and the process are reviewed. Relevant questions include whether the newly implemented system meets requirements and achieves project goals, whether the system is usable, reliable/available, properly scaled and fault-tolerant. Process checks include review of timelines and expenses, as well as user acceptance.
==== Disposal ====
At end of life, plans are developed for discontinuing the system and transitioning to its replacement. Related information and infrastructure must be repurposed, archived, discarded, or destroyed, while appropriately protecting security.
In the following diagram, these stages are divided into ten steps, from definition to creation and modification of IT work products:
=== Systems analysis and design ===
Systems analysis and design (SAD) can be considered a meta-development activity, which serves to set the stage and bound the problem. SAD can help balance competing high-level requirements. SAD interacts with distributed enterprise architecture, enterprise I.T. Architecture, and business architecture, and relies heavily on concepts such as partitioning, interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system description. This high-level description is then broken down into the components and modules which can be analyzed, designed, and constructed separately and integrated to accomplish the business goal. SDLC and SAD are cornerstones of full life cycle product and system planning.
=== Object-oriented analysis and design ===
Object-oriented analysis and design (OOAD) is the process of analyzing a problem domain to develop a conceptual model that can then be used to guide development. During the analysis phase, a programmer develops written requirements and a formal vision document via interviews with stakeholders.
The conceptual model that results from OOAD typically consists of use cases, and class and interaction diagrams. It may also include a user interface mock-up.
An output artifact does not need to be completely defined to serve as input of object-oriented design; analysis and design may occur in parallel. In practice the results of one activity can feed the other in an iterative process.
Some typical input artifacts for OOAD:
Conceptual model: A conceptual model is the result of object-oriented analysis. It captures concepts in the problem domain. The conceptual model is explicitly independent of implementation details.
Use cases: A use case is a description of sequences of events that, taken together, complete a required task. Each use case provides scenarios that convey how the system should interact with actors (users). Actors may be end users or other systems. Use cases may be further elaborated using diagrams, which identify the actor and the processes they perform.
System sequence diagram: A system sequence diagram (SSD) shows, for a particular use case, the events that actors generate and their order, including inter-system events.
User interface document: Document that shows and describes the user interface.
Data model: A data model describes how data elements relate to each other. The data model is created before the design phase. Object-oriented designs map directly from the data model. Relational designs are more involved.
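The direct mapping from a data model to object-oriented classes mentioned above can be sketched as follows. The Order/OrderLine entities and their fields are hypothetical examples chosen for illustration, not artifacts of any particular method:

```python
# Illustrative mapping from a simple data model (two related entities)
# to object-oriented classes. Entities and fields are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderLine:
    product: str
    quantity: int
    unit_price: float

    def subtotal(self) -> float:
        return self.quantity * self.unit_price

@dataclass
class Order:
    order_id: int
    lines: List[OrderLine] = field(default_factory=list)  # one-to-many relation

    def total(self) -> float:
        return sum(line.subtotal() for line in self.lines)

order = Order(1, [OrderLine("widget", 3, 2.50), OrderLine("gadget", 1, 10.00)])
print(order.total())
```

Each data-model entity becomes a class and each relationship becomes a reference or collection, which is why object-oriented designs are said to map directly from the data model while relational designs require additional normalization work.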
=== System lifecycle ===
The system lifecycle is a view of a system or proposed system that addresses all phases of its existence to include system conception, design and development, production and/or construction, distribution, operation, maintenance and support, retirement, phase-out, and disposal.
==== Conceptual design ====
The conceptual design stage is the stage where an identified need is examined, requirements for potential solutions are defined, potential solutions are evaluated, and a system specification is developed. The system specification represents the technical requirements that will provide overall guidance for system design. Because this document determines all future development, the stage cannot be completed until a conceptual design review has determined that the system specification properly addresses the motivating need.
Key steps within the conceptual design stage include:
Need identification
Feasibility analysis
System requirements analysis
System specification
Conceptual design review
==== Preliminary system design ====
During this stage of the system lifecycle, subsystems that perform the desired system functions are designed and specified in compliance with the system specification. Interfaces between subsystems are defined, as well as overall test and evaluation requirements. At the completion of this stage, a development specification is produced that is sufficient to perform detailed design and development.
Key steps within the preliminary design stage include:
Functional analysis
Requirements allocation
Detailed trade-off studies
Synthesis of system options
Preliminary design of engineering models
Development specification
Preliminary design review
For example, suppose that, as the system analyst of Viti Bank, a fast-growing bank in Fiji, you have been tasked to examine the current information system. Customers in remote rural areas find it difficult to access bank services; it can take them days or even weeks to travel to a location that offers them. With the vision of meeting the customers' needs, the bank has requested your services to examine the current system and to come up with solutions or recommendations for how it can be improved to meet those needs.
==== Detail design and development ====
This stage includes the development of detailed designs that bring the initial design work into a completed form of specifications. This work includes the specification of interfaces between the system and its intended environment, and a comprehensive evaluation of the system's logistical, maintenance and support requirements. Detail design and development is responsible for producing the product, process and material specifications, and may result in substantial changes to the development specification.
Key steps within the detail design and development stage include:
Detailed design
Detailed synthesis
Development of engineering and prototype models
Revision of development specification
Product, process, and material specification
Critical design review
==== Production and construction ====
During the production and/or construction stage the product is built or assembled in accordance with the requirements specified in the product, process and material specifications, and is deployed and tested within the operational target environment. System assessments are conducted in order to correct deficiencies and adapt the system for continued improvement.
Key steps within the product construction stage include:
Production and/or construction of system components
Acceptance testing
System distribution and operation
Operational testing and evaluation
System assessment
==== Utilization and support ====
Once fully deployed, the system is used for its intended operational role and maintained within its operational environment.
Key steps within the utilization and support stage include:
System operation in the user environment
Change management
System modifications for improvement
System assessment
==== Phase-out and disposal ====
Effectiveness and efficiency of the system must be continuously evaluated to determine when the product has met its maximum effective lifecycle. Considerations include: Continued existence of operational need, matching between operational requirements and system performance, feasibility of system phase-out versus maintenance, and availability of alternative systems.
== Phases ==
=== System investigation ===
During this step, current priorities that would be affected and how they should be handled are considered. A feasibility study determines whether creating a new or improved system is appropriate. This helps to estimate costs, benefits, resource requirements, and specific user needs.
The feasibility study should address operational, financial, technical, human factors, and legal/political concerns.
=== Analysis ===
The goal of analysis is to determine where the problem is. This step involves decomposing the system into pieces, analyzing project goals, breaking down what needs to be created, and engaging users to define requirements.
=== Design ===
In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. Modular design reduces complexity and allows the outputs to describe the system as a collection of subsystems.
The design stage takes as its input the requirements already defined. For each requirement, a set of design elements is produced.
Design documents typically include functional hierarchy diagrams, screen layouts, business rules, process diagrams, pseudo-code, and a complete data model with a data dictionary. These elements describe the system in sufficient detail that developers and engineers can develop and deliver the system with minimal additional input.
=== Testing ===
The code is tested at various levels in software testing. Unit, system, and user acceptance tests are typically performed. Many approaches to testing have been adopted.
The following types of testing may be relevant:
Path testing
Data set testing
Unit testing
System testing
Integration testing
Black-box testing
White-box testing
Regression testing
Automation testing
User acceptance testing
Software performance testing
=== Training and transition ===
Once a system has been stabilized through testing, SDLC ensures that proper training is prepared and performed before transitioning the system to support staff and end users. Training usually covers operational training for support staff as well as end-user training.
After training, systems engineers and developers transition the system to its production environment.
=== Operations and maintenance ===
Maintenance includes changes, fixes, and enhancements.
=== Evaluation ===
The final phase of the SDLC is to measure the effectiveness of the system and evaluate potential enhancements.
== Life cycle ==
=== Management and control ===
SDLC phase objectives are described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives while executing projects. Control objectives are clear statements of the desired result or purpose and should be defined and monitored throughout a project. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.
To manage and control a substantial SDLC initiative, a work breakdown structure (WBS) captures and schedules the work. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The project manager chooses a WBS format that best describes the project.
The diagram shows that coverage spans numerous phases of the SDLC, and that the associated management control domains (MCDs) map onto those phases. For example, analysis and design is performed primarily as part of the acquisition and implementation domain, and system build and prototype is performed primarily as part of the delivery and support domain.
=== Work breakdown structured organization ===
The upper section of the WBS provides an overview of the project scope and timeline. It should also summarize the major phases and milestones. The middle section is based on the SDLC phases. WBS elements consist of milestones and tasks to be completed, each with a deadline, rather than open-ended activities. Each task has a measurable output (e.g., an analysis document). A WBS task may rely on one or more activities (e.g. coding). Parts of the project needing support from contractors should have a statement of work (SOW). The development of an SOW does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC process that may be conducted by contractors.
=== Baselines ===
Baselines are established after four of the five phases of the SDLC, and are critical to the iterative nature of the model. Baselines become milestones.
functional baseline: established after the conceptual design phase.
allocated baseline: established after the preliminary design phase.
product baseline: established after the detail design and development phase.
updated product baseline: established after the production construction phase.
== Alternative methodologies ==
Alternative software development methods to systems development life cycle are:
Software prototyping
Joint applications development (JAD)
Rapid application development (RAD)
Extreme programming (XP)
Open-source development
End-user development
Object-oriented programming
== Strengths and weaknesses ==
Fundamentally, SDLC trades flexibility for control by imposing structure. It is more commonly used for large scale projects with many developers.
== See also ==
Application lifecycle management
Decision cycle
IPO model
Software development methodologies
== References ==
== Further reading ==
Cummings, Haag (2006). Management Information Systems for the Information Age. Toronto, McGraw-Hill Ryerson
Beynon-Davies P. (2009). Business Information Systems. Palgrave, Basingstoke. ISBN 978-0-230-20368-6
Computer World, 2002, Retrieved on June 22, 2006, from the World Wide Web:
Management Information Systems, 2005, Retrieved on June 22, 2006, from the World Wide Web:
== External links ==
The Agile System Development Lifecycle
Pension Benefit Guaranty Corporation – Information Technology Solutions Lifecycle Methodology
DoD Integrated Framework Chart IFC (front, back)
FSA Life Cycle Framework
HHS Enterprise Performance Life Cycle Framework
The Open Systems Development Life Cycle
System Development Life Cycle Evolution Modeling
Zero Deviation Life Cycle
Integrated Defense AT&L Life Cycle Management Chart, the U.S. DoD form of this concept. | Wikipedia/Systems_development_lifecycle |
Non-functional requirements (NFRs) need a framework to keep their number manageable. The analysis begins with softgoals that represent NFRs which stakeholders agree upon. Softgoals are goals that are hard to express precisely, but tend to be global qualities of a software system, such as usability, performance, security, and flexibility. A team that starts collecting them often finds a great many of them; structuring is a valuable approach for reducing the number to a manageable quantity. Several frameworks are available to serve as such a structure.
== Structuring Non-functional requirements ==
The following frameworks are useful to serve as structure for NFRs:
1. Goal Modelling
The finalised softgoals are usually decomposed and refined to uncover a tree structure of goals and subgoals, e.g. for the flexibility softgoal. Once the tree structures are uncovered, one is bound to find interfering softgoals in different trees; e.g. security goals generally interfere with usability. These softgoal trees then form a softgoal graph structure. The final step in this analysis is to pick particular leaf softgoals so that all the root softgoals are satisfied.[1]
2. IVENA - Integrated Approach to Acquisition of NFR
The method integrates a requirements tree.[2]
3. Context of an Organization
There are several models to describe the context of an organization, such as the Business Model Canvas, OrgManle [3], or others [4]. Those models also provide a good framework for assigning NFRs.
== Measuring the Non-functional requirements ==
SNAP is the Software Non-functional Assessment Process. While Function Points measure the functional requirements by sizing the data flow through a software application, IFPUG's SNAP measures the non-functional requirements.
The SNAP model consists of four categories and fourteen sub-categories to measure the non-functional requirements. Non-functional requirement are mapped to the relevant sub-categories. Each sub-category is sized, and the size of a requirement is the sum of the sizes of its sub-categories.
The SNAP sizing process is very similar to the Function Point sizing process. Within the application boundary, non-functional requirements are associated with relevant categories and their sub-categories. Using a standardized set of basic criteria, each of the sub-categories is then sized according to its type and complexity; the size of such a requirement is the sum of the sizes of its sub-categories. These sizes are then totaled to give the measure of non-functional size of the software application.
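The additive rule described above (a requirement's size is the sum of its sub-category sizes, and the application's non-functional size is the sum over all requirements) can be sketched in a few lines of Python. The sub-category names and point values below are invented for illustration; the actual fourteen sub-categories and their sizing criteria are defined in the IFPUG SNAP manual:

```python
# Illustrative sketch of SNAP-style additive sizing (hypothetical
# sub-category names and size values, not taken from the SNAP manual).

def size_requirement(subcategory_sizes):
    """A requirement's size is the sum of its sub-category sizes."""
    return sum(subcategory_sizes.values())

def size_application(requirements):
    """Total non-functional size: the sum over all requirements."""
    return sum(size_requirement(subs) for subs in requirements)

# A hypothetical application with two non-functional requirements:
reqs = [
    {"data_entry_validation": 12, "logical_operations": 8},  # requirement A
    {"user_interface": 15},                                  # requirement B
]
print(size_application(reqs))  # 35
```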
Beta testing of the model shows that SNAP size has a strong correlation with the work effort required to develop the non-functional portion of the software application.
== See also ==
SNAP Points
== References ==
[1] Mylopoulos, Chung, and Yu: "From Object-oriented to Goal-oriented Requirements Analysis", Communications of the ACM, January 1999
[2] Götz, Rolf; Scharnweber, Heiko: "IVENA: Integriertes Vorgehen zur Erhebung nichtfunktionaler Anforderungen". https://www.pst.ifi.lmu.de/Lehre/WS0102/architektur/VL1/Ivena.pdf
[3] Teich, Irene: Tutorial PlanMan. Working paper Postbauer-Heng, Germany 2005. Available on Demand.
[4] Teich, Irene: Context of the organization-Models. Working paper Meschede, Germany 2020. Available on Demand. | Wikipedia/Non-Functional_Requirements_framework |
In software development, the stability model (SM) is a method for designing and modelling software. It is an extension of object-oriented software design (OOSD) methodology, such as Unified Modeling Language (UML), but adds its own set of rules, guidelines, procedures, and heuristics to achieve more advanced object-oriented (OO) software.
The motivation is to achieve a higher level of OO features, such as
Stability: the objects will be stable over time and will not need changes
Reusability: the objects can be reused for various kind of applications
Maintainability: the objects will need the least amount of maintenance
== Principles ==
The design process tries to make use of common sense while guiding the designer through SM-based design. Once the process and methodology are understood, people need only a minimal amount of ramp-up time to understand new applications and objects.
The stability model is built using three main concepts:
Enduring business themes (EBT)
Business objects (BO)
Industrial objects (IO)
== History ==
The SM method of OOSD was formulated by Mohamed Fayad. He has been the editor-in-chief of the IEEE's Computer magazine for many years, has taught OOSD at two US universities, and has written several books on the subject.
== References ==
== Bibliography ==
"BRAVERY STABLE ARCHITECTURAL PATTERN" (PDF). 2010. Archived from the original (PDF) on 2015-11-17. Retrieved November 13, 2015.
== External links ==
Homepage of Dr. Mohamed Fayad at the Wayback Machine (archived 2021-04-12) | Wikipedia/Stability_model |
High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.
There is now more dependence on these systems as a result of modernization. For example, to carry out their regular daily tasks, hospitals and data centers need their systems to be highly available. Availability refers to the ability of the user to access a service or system, whether to submit new work, update or modify existing work, or retrieve the results of previous work. If a user cannot access the system, it is considered unavailable from the user's perspective. The term downtime is generally used to describe periods when a system is unavailable.
== Resilience ==
High availability is a property of network resilience, the ability to "provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." Threats and challenges for services range from simple misconfiguration, through large-scale natural disasters, to targeted attacks. As such, network resilience touches a very wide range of topics. In order to increase the resilience of a given communication network, the probable challenges and risks have to be identified, and appropriate resilience metrics have to be defined for the service to be protected.
The importance of network resilience is continuously increasing, as communication networks are becoming a fundamental component in the operation of critical infrastructures. Consequently, recent efforts focus on interpreting and improving network and computing resilience with applications to critical infrastructures. As an example, one can consider as a resilience objective the provisioning of services over the network, instead of the services of the network itself. This may require coordinated response from both the network and from the services running on top of the network.
These services include:
supporting distributed processing
supporting network storage
maintaining service of communication services such as
video conferencing
instant messaging
online collaboration
access to applications and data as needed
Resilience and survivability are used interchangeably, depending on the specific context of a given study.
== Principles ==
There are three principles of systems design in reliability engineering that can help achieve high availability.
Elimination of single points of failure. This means adding or building redundancy into the system so that failure of a component does not mean failure of the entire system.
Reliable crossover. In redundant systems, the crossover point itself tends to become a single point of failure. Reliable systems must provide for reliable crossover.
Detection of failures as they occur. If the two principles above are observed, then a user may never see a failure – but the maintenance activity must.
== Scheduled and unscheduled downtime ==
A distinction can be made between scheduled and unscheduled downtime. Typically, scheduled downtime is a result of maintenance that is disruptive to system operation and usually cannot be avoided with a currently installed system design. Scheduled downtime events might include patches to system software that require a reboot or system configuration changes that only take effect upon a reboot. In general, scheduled downtime is usually the result of some logical, management-initiated event. Unscheduled downtime events typically arise from some physical event, such as a hardware or software failure or environmental anomaly. Examples of unscheduled downtime events include power outages, failed CPU or RAM components (or possibly other failed hardware components), an over-temperature related shutdown, logically or physically severed network connections, security breaches, or various application, middleware, and operating system failures.
If users can be warned away from scheduled downtimes, then the distinction is useful. But if the requirement is for true high availability, then downtime is downtime whether or not it is scheduled.
Many computing sites exclude scheduled downtime from availability calculations, assuming that it has little or no impact upon the computing user community. By doing this, they can claim to have phenomenally high availability, which might give the illusion of continuous availability. Systems that exhibit truly continuous availability are comparatively rare and higher priced, and most have carefully implemented specialty designs that eliminate any single point of failure and allow online hardware, network, operating system, middleware, and application upgrades, patches, and replacements. For certain systems, scheduled downtime does not matter, for example, system downtime at an office building after everybody has gone home for the night.
== Percentage calculation ==
Availability is usually expressed as a percentage of uptime in a given year. The following table shows the downtime that will be allowed for a particular percentage of availability, presuming that the system is required to operate continuously. Service level agreements often refer to monthly downtime or availability in order to calculate service credits to match monthly billing cycles. The following table shows the translation from a given availability percentage to the corresponding amount of time a system would be unavailable.
The terms uptime and availability are often used interchangeably but do not always refer to the same thing. For example, a system can be "up" with its services not "available" in the case of a network outage. Or a system undergoing software maintenance can be "available" to be worked on by a system administrator, but its services do not appear "up" to the end user or customer. The subject of the terms is thus important here: whether the focus of a discussion is the server hardware, server OS, functional service, software service/process, or similar, it is only if there is a single, consistent subject of the discussion that the words uptime and availability can be used synonymously.
=== Five-by-five mnemonic ===
A simple mnemonic rule states that 5 nines allows approximately 5 minutes of downtime per year. Variants can be derived by multiplying or dividing by 10: 4 nines is 50 minutes and 3 nines is 500 minutes. In the opposite direction, 6 nines is 0.5 minutes (30 sec) and 7 nines is 3 seconds.
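The mnemonic is easy to check numerically. A small sketch, using a 365-day (non-leap) year as the article's measurement section does:

```python
# Verify the five-by-five mnemonic: "n nines" of availability allows
# roughly 5 * 10**(5 - n) minutes of downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(nines):
    """Allowed downtime per year for an availability of n nines."""
    unavailability = 10 ** -nines
    return unavailability * MINUTES_PER_YEAR

for n in range(3, 8):
    print(n, "nines:", round(downtime_minutes_per_year(n), 2), "min/year")
# 3 nines -> ~525.6 min (the mnemonic says ~500),
# 4 -> ~52.56 (~50), 5 -> ~5.26 (~5), 6 -> ~0.53 (~30 s), 7 -> ~0.05 (~3 s)
```

The mnemonic's round figures (5, 50, 500 minutes) are approximations of the exact values (5.26, 52.6, 526 minutes).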
=== "Powers of 10" trick ===
Another memory trick to calculate the allowed downtime duration for an "n-nines" availability percentage is to use the formula 8.64 × 10^(4−n) seconds per day.
For example, 90% ("one nine") yields the exponent 4 − 1 = 3, and therefore the allowed downtime is 8.64 × 10^3 seconds per day.
Also, 99.999% ("five nines") gives the exponent 4 − 5 = −1, and therefore the allowed downtime is 8.64 × 10^−1 seconds per day.
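The trick works because a day has 86,400 seconds, so the daily downtime is (1 − availability) × 86,400 = 10^−n × 8.64 × 10^4 seconds. A quick sketch confirming that the rule matches the direct computation:

```python
# The "powers of 10" rule: allowed downtime for n-nines availability
# is 8.64 * 10**(4 - n) seconds per day (a day has 86,400 seconds).

SECONDS_PER_DAY = 86_400

def downtime_seconds_per_day(n):
    """Allowed downtime per day for n nines, via the memory trick."""
    return 8.64 * 10 ** (4 - n)

# The rule agrees with computing (1 - availability) * 86,400 directly:
for n in (1, 3, 5):
    availability = 1 - 10 ** -n
    direct = (1 - availability) * SECONDS_PER_DAY
    assert abs(downtime_seconds_per_day(n) - direct) < 1e-6

print(round(downtime_seconds_per_day(1), 2))  # 90%     -> 8640.0 s/day
print(round(downtime_seconds_per_day(5), 3))  # 99.999% -> 0.864 s/day
```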
=== "Nines" ===
Percentages of a particular order of magnitude are sometimes referred to by the number of nines or "class of nines" in the digits. For example, electricity that is delivered without interruptions (blackouts, brownouts or surges) 99.999% of the time would have 5 nines reliability, or class five. In particular, the term is used in connection with mainframes or enterprise computing, often as part of a service-level agreement.
Similarly, percentages ending in a 5 have conventional names, traditionally the number of nines, then "five", so 99.95% is "three nines five", abbreviated 3N5. This is casually referred to as "three and a half nines", but this is incorrect: a 5 is only a factor of 2, while a 9 is a factor of 10, so a 5 is 0.3 nines (per the formula below: log₁₀ 2 ≈ 0.3): 99.95% availability is 3.3 nines, not 3.5 nines. More simply, going from 99.9% availability to 99.95% availability is a factor of 2 (0.1% to 0.05% unavailability), but going from 99.95% to 99.99% availability is a factor of 5 (0.05% to 0.01% unavailability), over twice as much.
A formulation of the class of 9s c based on a system's unavailability x would be c := ⌊−log₁₀ x⌋ (cf. floor and ceiling functions).
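The class-of-9s formula, and the fractional "number of nines" used in the 3N5 discussion above, can be sketched as:

```python
import math

def class_of_nines(unavailability):
    """Class of 9s: c = floor(-log10(x)) for unavailability x."""
    return math.floor(-math.log10(unavailability))

def nines(availability):
    """Fractional 'number of nines' for an availability figure."""
    return -math.log10(1 - availability)

print(class_of_nines(0.0005))   # 3 -- 99.95% availability is still class three
print(round(nines(0.9995), 1))  # 3.3 -- "three nines five" is 3.3 nines, not 3.5
```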
A similar measurement is sometimes used to describe the purity of substances.
In general, the number of nines is not often used by a network engineer when modeling and measuring availability, because it is hard to apply in formulas. More often, the unavailability is expressed as a probability (like 0.00001), or a downtime per year is quoted. Availability specified as a number of nines is often seen in marketing documents. The use of the "nines" has been called into question, since it does not appropriately reflect that the impact of unavailability varies with its time of occurrence. For large numbers of 9s, the "unavailability" index (a measure of downtime rather than uptime) is easier to handle. For example, this is why an "unavailability" rather than availability metric is used in hard disk or data link bit error rates.
Sometimes the humorous term "nine fives" (55.5555555%) is used to contrast with "five nines" (99.999%), though this is not an actual goal, but rather a sarcastic reference to something totally failing to meet any reasonable target.
== Measurement and interpretation ==
Availability measurement is subject to some degree of interpretation. A system that has been up for 365 days in a non-leap year might have been eclipsed by a network failure that lasted for 9 hours during a peak usage period; the user community will see the system as unavailable, whereas the system administrator will claim 100% uptime. However, given the true definition of availability, the system will be approximately 99.9% available, or three nines (8751 hours of available time out of 8760 hours per non-leap year). Also, systems experiencing performance problems are often deemed partially or entirely unavailable by users, even when the systems are continuing to function. Similarly, unavailability of select application functions might go unnoticed by administrators yet be devastating to users – a true availability measure is holistic.
Availability must be measured to be determined, ideally with comprehensive monitoring tools ("instrumentation") that are themselves highly available. If there is a lack of instrumentation, systems supporting high volume transaction processing throughout the day and night, such as credit card processing systems or telephone switches, are often inherently better monitored, at least by the users themselves, than systems which experience periodic lulls in demand.
An alternative metric is mean time between failures (MTBF).
== Closely related concepts ==
Recovery time (or estimated time of repair (ETR), also known as recovery time objective (RTO)) is closely related to availability: it is the total time required for a planned outage, or the time required to fully recover from an unplanned outage. Another metric is mean time to recovery (MTTR). Recovery time could be infinite with certain system designs and failures, i.e. full recovery is impossible. One such example is a fire or flood that destroys a data center and its systems when there is no secondary disaster recovery data center.
Another related concept is data availability, that is the degree to which databases and other information storage systems faithfully record and report system transactions. Information management often focuses separately on data availability, or Recovery Point Objective, in order to determine acceptable (or actual) data loss with various failure events. Some users can tolerate application service interruptions but cannot tolerate data loss.
A service level agreement ("SLA") formalizes an organization's availability objectives and requirements.
== Military control systems ==
High availability is one of the primary requirements of the control systems in unmanned vehicles and autonomous maritime vessels. If the controlling system becomes unavailable, the Ground Combat Vehicle (GCV) or ASW Continuous Trail Unmanned Vessel (ACTUV) would be lost.
== System design ==
On one hand, adding more components to an overall system design can undermine efforts to achieve high availability, because complex systems inherently have more potential failure points and are more difficult to implement correctly. While some analysts would put forth the theory that the most highly available systems adhere to a simple architecture (a single, high-quality, multi-purpose physical system with comprehensive internal hardware redundancy), this architecture suffers from the requirement that the entire system must be brought down for patching and operating system upgrades. More advanced system designs allow for systems to be patched and upgraded without compromising service availability (see load balancing and failover). High availability systems should require minimal human intervention to restore operation, because the most common cause of outages is human error.
=== High availability through redundancy ===
On the other hand, redundancy is used to create systems with high levels of availability (e.g. popular ecommerce websites). In this case it is required to have high levels of failure detectability and avoidance of common cause failures.
If redundant parts are used in parallel and fail independently (e.g. by not being within the same data center), they can exponentially increase the availability and make the overall system highly available. For N parallel components, each having availability X, the combined availability is given by:
Availability of parallel components = 1 − (1 − X)^N
So, for example, if each component has only 50% availability, using 10 components in parallel achieves 99.9023% availability.
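A minimal check of this formula, assuming fully independent component failures:

```python
def parallel_availability(x, n):
    """Availability of n independent parallel components, each with
    availability x: the system fails only if all n fail at once."""
    return 1 - (1 - x) ** n

# Ten parallel components at 50% availability each:
print(round(parallel_availability(0.5, 10) * 100, 4))  # 99.9023
```

Note the independence assumption is essential: components sharing a data center, power feed, or software version have correlated failures, and the formula then overstates the achievable availability.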
Two kinds of redundancy are passive redundancy and active redundancy.
Passive redundancy is used to achieve high availability by including enough excess capacity in the design to accommodate a performance decline. The simplest example is a boat with two separate engines driving two separate propellers. The boat continues toward its destination despite failure of a single engine or propeller. A more complex example is multiple redundant power generation facilities within a large system involving electric power transmission. Malfunction of single components is not considered to be a failure unless the resulting performance decline exceeds the specification limits for the entire system.
Active redundancy is used in complex systems to achieve high availability with no performance decline. Multiple items of the same kind are incorporated into a design that includes a method to detect failure and automatically reconfigure the system to bypass failed items using a voting scheme. This is used with complex computing systems that are linked. Internet routing is derived from early work by Birman and Joseph in this area. Active redundancy may introduce more complex failure modes into a system, such as continuous system reconfiguration due to faulty voting logic.
Zero downtime system design means that modeling and simulation indicates mean time between failures significantly exceeds the period of time between planned maintenance, upgrade events, or system lifetime. Zero downtime involves massive redundancy, which is needed for some types of aircraft and for most kinds of communications satellites. Global Positioning System is an example of a zero downtime system.
Fault instrumentation can be used in systems with limited redundancy to achieve high availability. Maintenance actions occur during brief periods of downtime only after a fault indicator activates. Failure is only significant if this occurs during a mission critical period.
Modeling and simulation is used to evaluate the theoretical reliability for large systems. The outcome of this kind of model is used to evaluate different design options. A model of the entire system is created, and the model is stressed by removing components. Redundancy simulation involves the N−x criterion. N represents the total number of components in the system. x is the number of components used to stress the system. N−1 means the model is stressed by evaluating performance with all possible combinations where one component is faulted. N−2 means the model is stressed by evaluating performance with all possible combinations where two components are faulted simultaneously.
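The N−1/N−2 enumeration described above can be sketched as follows; the component capacities and the demand figure here are invented for illustration, and a real reliability model would evaluate far richer performance criteria than a simple capacity sum:

```python
from itertools import combinations

def survives_n_minus_x(capacities, demand, x):
    """True if the system still meets demand under every possible
    combination of x simultaneous component failures (N-x stress)."""
    n = len(capacities)
    for faulted in combinations(range(n), x):
        surviving = sum(c for i, c in enumerate(capacities) if i not in faulted)
        if surviving < demand:
            return False
    return True

caps = [40, 40, 40, 40]                  # e.g. four generators of 40 MW each
print(survives_n_minus_x(caps, 100, 1))  # True: any 3 survivors give 120 >= 100
print(survives_n_minus_x(caps, 100, 2))  # False: any 2 survivors give 80 < 100
```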
== Reasons for unavailability ==
A survey among academic availability experts in 2010 ranked reasons for unavailability of enterprise IT systems. All reasons refer to not following best practice in each of the following areas (in order of importance):
Monitoring of the relevant components
Requirements and procurement
Operations
Avoidance of network failures
Avoidance of internal application failures
Avoidance of external services that fail
Physical environment
Network redundancy
Technical solution of backup
Process solution of backup
Physical location
Infrastructure redundancy
Storage architecture redundancy
A book on the factors themselves was published in 2003.
== Costs of unavailability ==
In a 1998 report from IBM Global Services, unavailable systems were estimated to have cost American businesses $4.54 billion in 1996, due to lost productivity and revenues.
== See also ==
Availability
Fault tolerance
High-availability cluster
Overall equipment effectiveness
Reliability, availability and serviceability
Responsiveness
Scalability
Ubiquitous computing
== Notes ==
== References ==
== External links ==
Lecture Notes on Enterprise Computing Archived November 16, 2013, at the Wayback Machine University of Tübingen
Lecture notes on Embedded Systems Engineering by Prof. Phil Koopman
Uptime Calculator (SLA) | Wikipedia/Resilience_(network) |
Stages-of-growth model is a theoretical model for the growth of information technology (IT) in a business or similar organization. It was developed by Richard L. Nolan during the early 1970s, with the final version of the model published in the Harvard Business Review in 1979.
== Development ==
Both articles describing the stages were first published in the Harvard Business Review. The first proposal was made in 1973 and consisted of only four stages. Two additional stages were added in 1979 to complete his six-stage model.
== Summary ==
Nolan's model concerns the general approach to IT in business. The model proposes that evolution of IT in organizations begins slowly in Stage I, the "initiation" stage. This stage is marked by "hands off" user awareness and an emphasis on functional applications to reduce costs. Stage I is followed by further growth of IT in the "contagion" stage. In this stage there is a proliferation of applications as well as the potential for more problems to arise. During Stage III a need for "control" arises. Centralized controls are put in place and a shift occurs from management of computers to management of data resources. Next, in Stage IV, "integration" of diverse technological solutions evolves. Management of data allows development without increasing IT expenditures in Stage V. Finally, in Stage VI, "maturity", high control is exercised by using all the information from the previous stages.
=== Stage I – Initiation ===
In this stage, information technology is first introduced into the organization. According to Nolan's article in 1973, computers were introduced into companies for two reasons. The first reason deals with the company reaching a size where the administrative processes cannot be accomplished without computers. Also, the success of the business justifies large investment in specialized equipment. The second reason deals with computational needs. Nolan defined the critical size of the company as the most prevalent reason for computer acquisition. Due to the unfamiliarity of personnel with the technology, users tend to take a "hands off" approach to new technology. This introductory software is simple to use and cheap to implement, which provides substantial monetary savings to the company. During this stage, the IT department receives little attention from management and works in a "carefree" atmosphere.
Stage I Key points:
User awareness is characterized as being "hands off".
IT personnel are "specialized for technological learning".
IT planning and control is not extensive.
There is an emphasis on functional applications to reduce costs.
=== Stage II – Contagion ===
Even though the computers are recognised as "change agents" in Stage I, Nolan acknowledged that many users become alienated by computing. Because of this, Stage II is characterised by a managerial need to explain the potential of computer applications to alienated users. This leads to the adoption of computers in a range of different areas. A problem that arises in Stage II is that project and budgetary controls are not developed. Unavoidably, this leads to a saturation of existing computer capacity and more sophisticated computer systems being obtained. System sophistication requires employing specialised professionals, and due to the shortage of qualified individuals, hiring them results in high salary costs. The budget for the computer organisation rises significantly and causes concern for management. Although the price of Stage II is high, it is evident that planning and control of computer systems is necessary.
Stage II Key points:
There is a proliferation of applications.
Users are superficially enthusiastic about using data processing.
Management control is even more relaxed.
There is a rapid growth of budgets.
Treatment of the computer by management is primarily as just a machine.
Rapid growth of computer use occurs throughout the organisations' functional areas.
Computer use is plagued by crisis after crisis.
=== Stage III – Control ===
Stage III is a reaction against the excessive and uncontrolled expenditure of time and money on computer systems, and the major problem for management is organizing tasks to control computer operating costs. In this stage, project management and management report systems are organized, which leads to the development of programming, documentation, and operation standards. During Stage III, a shift occurs from management of computers to management of data resources. This shift is an outcome of analysis of how to increase management control and planning in expanding data processing operations. The shift also provides the flexibility in data processing that is needed under management's new controls. The major characteristic of Stage III is the reconstruction of data processing operations.
Stage III Key points:
There is no reduction in computer use.
IT division's importance to the organization is greater.
Centralized controls are put in place.
Applications are often incompatible or inadequate.
There is use of database and communications, often with negative general management reaction.
End user frustration is often the outcome.
=== Stage IV – Integration ===
Stage IV features the adoption of new technology to integrate systems that were previously separate entities. This creates data processing (IT) expenditure growth rates similar to that of Stage II. In the latter half of Stage IV, exclusive reliance on computer controls leads to inefficiencies. The inefficiencies associated with rapid growth may create another wave of problems simultaneously. This is the last stage that Nolan acknowledged in his initial proposal of the stages of growth in 1973.
Stage IV Key points:
There is rise of control by the users.
A larger data processing budget growth exists.
There is greater demand for on-line database facilities.
Data processing department now operates like a computer utility.
There is formal planning and control within data processing.
Users are more accountable for their applications.
The use of steering committees and applications financial planning becomes important.
Data processing has better management controls and set standards.
=== Stage V – Data administration ===
Nolan determined that four stages were not enough to describe the proliferation of IT in an organization and added Stage V in 1979. Stage V features a new emphasis on managing corporate data rather than IT. Like the subsequent Stage VI, it is marked by the development and maturity of the new concept of data administration.
Stage V Key points:
Data administration is introduced.
There is identification of data similarities, its usage, and its meanings within the whole organization.
The applications portfolio is integrated into the organization.
Data processing department now serves more as an administrator of data resources than of machines.
A key difference is the use of the term IT/IS rather than data processing.
=== Stage VI – Maturity ===
In Stage VI, the application portfolio (tasks like order entry, general ledger, and material requirements planning) is completed and its structure “mirrors” the organization and the information flows in the company. During this stage, tracking sales growth becomes an important aspect. On average, 10% of processing is batch and remote job entry, 60% database and data communications processing, 5% personal computing, and 25% minicomputer processing. Management control systems are used the most in Stage VI (40%). There are three aspects of management control: manufacturing, marketing, and financial. Manufacturing control demands forecasting, looking ahead to future needs. Marketing control deals strictly with research. Financial control forecasts cash requirements for the future. Stage VI exercises high control by compiling all of the information from Stages I through V. This allows the organization to function at high levels of efficiency and effectiveness.
Stage VI Key points:
Systems now reflect the real information needs of the organization.
Greater use of data resources to develop competitive and opportunistic applications.
Data processing organisation is viewed solely as a data resource function.
Data processing now emphasizes data resource strategic planning.
Ultimately, users and DP department jointly responsible for the use of data resources within the organization.
The manager of the IT system takes on the same importance in the organizational hierarchy as, say, the director of finance or the director of HR.
== Initial reaction ==
Richard Nolan's Stages of Growth Model seemed ahead of its time when it was first published in the 1970s.
== Legacy ==
Critics agree that Nolan's model presents several shortcomings and is slightly out of date. As time has progressed, Richard Nolan's Stages of Growth Model has revealed some apparent weaknesses. However, many agree that this does not take away from his innovative look into the realm of computing development.
== Criticism ==
An argument posed dealt with the main focus on the change in budget, and whether it is “reasonable to assume that a single variable serves as a suitable surrogate for so much.” It seems logical that this single variable could be an indicator of other variables such as the organizational environment or an organization's learning curve, but not that it is the sole driving force of the entire model. Nolan shows little connection that would make his initial point a valid one.
In his model, Richard Nolan states that the force behind the growth of computing through the stages is technological change. King and Kraemer find this to be far too general as they say, “there are additional factors that should be considered. Most important are the "demand-side" factors that create a ripe environment for technological changes to be considered and adopted.” As proposed, technological change has a multitude of facets that determine its necessity. Change cannot be brought forth unless it is needed under certain circumstances. Unwarranted change would result in excess costs and potential failure of the process.
Last, the stages of growth model assumes straightforward organizational goals that are to be determined through the technological change. This can be viewed as very naïve from the user perspective. King and Kraemer state, “the question of whether organizational goals are uniform and consistent guides for the behavior of organizational actors, as opposed to dynamic and changing targets that result from competition and conflict among organizational actors, has received considerable attention in the literature on computing.” Clearly, organizational goals are ever changing and sometimes rigid indicators of direction. They cannot be “uniform” objectives that are not subject to change.
== References ==
== Further reading ==
Nolan, R.L. (1979), "Managing the crises in data processing", Harvard Business Review, March–April 1979, p. 115. | Wikipedia/Stages_of_growth_model
A maturity model is a framework for measuring an organization's maturity, or that of a business function within an organization, with maturity being defined as a measurement of an organization's ability for continuous improvement in a particular discipline (as defined in O-ISM3). The higher the maturity, the higher the chances that incidents or errors will lead to improvements, either in the quality or in the use of the resources of the discipline as implemented by the organization.
Most maturity models assess qualitatively people/culture, processes/structures, and objects/technology.
Two approaches to implementing maturity models exist. With a top-down approach, such as that proposed by Becker et al., a fixed number of maturity stages or levels is specified first and then corroborated with characteristics (typically in the form of specific assessment items) that support the initial assumptions about how maturity evolves. With a bottom-up approach, such as that suggested by Lahrmann et al., distinct characteristics or assessment items are determined first and then clustered, in a second step, into maturity levels to induce a more general view of the different steps of maturity evolution.
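The bottom-up idea — score individual assessment items first, then derive a level from them — can be illustrated with a small sketch. The item names, scores, and score bands below are invented for illustration; they do not come from Lahrmann et al. or any real model.

```python
# Illustrative bottom-up maturity scoring: assessment items are scored
# first, then aggregated and mapped onto maturity levels 1-5.
# All item names, scores, and band thresholds are hypothetical.

def maturity_level(scores, bands=(0.2, 0.4, 0.6, 0.8)):
    """Map the mean of item scores (each in [0, 1]) to a level 1-5."""
    mean = sum(scores.values()) / len(scores)
    level = 1
    for threshold in bands:
        if mean >= threshold:
            level += 1
    return level

items = {
    "documented processes": 0.7,  # processes/structures
    "staff training":       0.5,  # people/culture
    "tool support":         0.9,  # objects/technology
}

print(maturity_level(items))  # mean 0.7 -> level 4
```

Note how the three example items line up with the people/culture, processes/structures, and objects/technology dimensions that most maturity models assess; a top-down model would instead fix the five levels first and write assessment items to match them.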
== Notable models ==
=== Analytics ===
Big data maturity model
=== Cybersecurity ===
Cybersecurity Maturity Model Certification (CMMC)
=== Human resources ===
People Capability Maturity Model (PCMM) (for the management of human assets)
=== Information security management ===
Open Information Security Maturity Model (O-ISM3)
=== Information technology ===
Capability Maturity Model (CMM, focusing on software development, largely superseded by CMMI)
Open Source Maturity Model (for open-source software development)
Service Integration Maturity Model (for SOA)
Modeling Maturity Levels (for software specification)
Darwin Information Typing Architecture (DITA) Maturity Model
Richardson Maturity Model (for HTTP-based web services)
ISO/IEC 15504 (for process maturity, deprecated)
ISO/IEC 33000 series (for Information technology Process assessment)
=== Project management ===
OPM3 (Organisational Project Management Maturity Model)
P3M3 (Portfolio, Programme and Project Management Maturity Model)
=== Quality management ===
Quality Management Maturity Grid (QMMG)
=== Testing ===
Testing Maturity Model (TMM) (assessing test processes in an organization)
=== Universal ===
Capability Maturity Model Integration (CMMI)
== References == | Wikipedia/Maturity_model |
The Testing Maturity Model (TMM) was based on the Capability Maturity Model, and first produced by the Illinois Institute of Technology.
Its aim is to be used in a similar way to CMM: to provide a framework for assessing the maturity of the test processes in an organisation, and so to provide targets for improving maturity.
Each level from 2 upwards has a defined set of processes and goals, which lead to practices and sub-practices.
The TMM has since been replaced by the Test Maturity Model integration (TMMi), which is managed by the TMMi Foundation.
== See also ==
Enterprise Architecture Assessment Framework
== References ==
The article describing this concept was first published in: Crosstalk, August and September 1996
"Developing a Testing Maturity Model: Parts I and II", Ilene Burnstein, Taratip Suwannasart, and C.R. Carlson,
Illinois Institute of Technology (article not in online archives at Crosstalk online anymore) | Wikipedia/Testing_Maturity_Model |
Design Review (originally titled New Zealand Design Review) is the journal of the Architectural Centre Incorporated, Wellington. The Centre was founded in 1946, and began the first architectural school in Wellington (1947) and the first town planning school in New Zealand (1949). The Centre was unique at the time of its founding in that it invited members interested in a broad range of design and the arts, rather than restricting membership to professional architects and architectural students. Internationally, it is one of the oldest organisations of its type.
== Philosophy and Scope ==
The Centre began the two-monthly publication of New Zealand Design Review in 1948. The journal addressed design topics as broad as furniture, town planning, theatre and stage design, packaging, church design, book-binding, poster design, industrial design and of course architecture. It hence reflected the Centre's interest in architecture, design and the arts in the broadest sense and was the first journal of its kind in New Zealand. The editorial of April – May 1949 explicitly asked the question "What is Design?", answering it with:
Like everything that has to do with the arts, design cannot be tested for its quality in a laboratory ... The elusive quality that a consensus of opinion agrees to call good design is not to be defined in terms like an axiom in geometry ... So we will leave the making of formulas and rules to those who like that sort of thing ... we shall publish in each number a discussion on some particular object; a house, a chair, a teapot or what have you. The contributor will tell you his or her opinion about the merits or demerits of the way that thing is designed, omitting any waving of the big stick to lay down laws of design. It is for you to decide if you think they are right.
== Contributors ==
Contributors included E. C. Simpson, Doreen Blumhardt, Helen Hitchings, E.H. McCormick, Odo Strewe, William Toomath, Gordon Wilson, Anna Plischke, Geoff Nees, film maker John O'Shea, writer Alan Mulgan, and photographers John Pascoe and Irene Koppel. Editors of the journal included architects Ernst Plischke, Maurice Patience, and Al Gabites, and well-known artists E. Mervyn Taylor, and Eric Lee-Johnson. The final issue of Design Review was published in 1954.
== Digitisation ==
The entire journal has been digitised by the New Zealand Electronic Text Centre, a unit of the library at Victoria University of Wellington, and can be viewed online.
== References == | Wikipedia/Design_Review_(publication) |
In sociology, an industrial society is a society driven by the use of technology and machinery to enable mass production, supporting a large population with a high capacity for division of labour. Such a structure developed in the Western world in the period of time following the Industrial Revolution, and replaced the agrarian societies of the pre-modern, pre-industrial age. Industrial societies are generally mass societies, and may be succeeded by an information society. They are often contrasted with traditional societies.
Industrial societies use external energy sources, such as fossil fuels, to increase the rate and scale of production. The production of food is shifted to large commercial farms where the products of industry, such as combine harvesters and fossil fuel–based fertilizers, are used to decrease required human labor while increasing production. No longer needed for the production of food, excess labor is moved into these factories where mechanization is utilized to further increase efficiency. As populations grow, and mechanization is further refined, often to the level of automation, many workers shift to expanding service industries.
Industrial society makes urbanization desirable, in part so that workers can be closer to centers of production, and the service industry can provide labor to workers and those that benefit financially from them, in exchange for a piece of production profits with which they can buy goods. This leads to the rise of very large cities and surrounding suburb areas with a high rate of economic activity.
These urban centers require the input of external energy sources to overcome the diminishing returns of agricultural consolidation, caused partially by the lack of nearby arable land and the associated transportation and storage costs; without such inputs they are unsustainable. This makes the reliable availability of the needed energy resources a high priority in industrial government policies.
== Industrial development ==
Prior to the Industrial Revolution in Europe and North America, followed by further industrialization throughout the world in the 20th century, most economies were largely agrarian. Basics were often made within the household and most other manufacturing was carried out in smaller workshops by artisans with limited specialization or machinery.
In Europe during the late Middle Ages, artisans in many towns formed guilds to self-regulate their trades and collectively pursue their business interests. Economic historian Sheilagh Ogilvie has suggested the guilds further restrained the quality and productivity of manufacturing. There is some evidence, however, that even in ancient times, large economies such as the Roman Empire or Chinese Han dynasty had developed factories for more centralized production in certain industries.
With the Industrial Revolution, the manufacturing sector became a major part of European and North American economies, both in terms of labor and production, contributing possibly a third of all economic activity. Along with rapid advances in technology, such as steam power and mass steel production, the new manufacturing drastically reconfigured previously mercantile and feudal economies. Even today, industrial manufacturing is significant to many developed and semi-developed economies.
== Deindustrialisation ==
Historically certain manufacturing industries have gone into a decline due to various economic factors, including the development of replacement technology or the loss of competitive advantage. An example of the former is the decline in carriage manufacturing when the automobile was mass-produced.
A recent trend has been the migration of prosperous, industrialized nations towards a post-industrial society. This has come with a major shift in labor and production away from manufacturing and towards the service sector, a process dubbed tertiarization. Additionally, since the late 20th century, rapid changes in communication and information technology (sometimes called an information revolution) have allowed sections of some economies to specialize in a quaternary sector of knowledge and information-based services. For these and other reasons, in a post-industrial society, manufacturers can and often do relocate their industrial operations to lower-cost regions in a process known as off-shoring.
Measurements of manufacturing industries outputs and economic effect are not historically stable. Traditionally, success has been measured in the number of jobs created. The reduced number of employees in the manufacturing sector has been assumed to result from a decline in the competitiveness of the sector, or the introduction of the lean manufacturing process.
Related to this change is the upgrading of the quality of the product being manufactured. While it is possible to produce a low-technology product with low-skill labour, the ability to manufacture high-technology products well is dependent on a highly skilled staff.
== Industrial policy ==
Today, as industry is an important part of most societies and nations, many governments will have at least some role in planning and regulating industry. This can include issues such as industrial pollution, financing, vocational education, and labour law.
== Industrial labour ==
In an industrial society, industry employs a major part of the population. This occurs typically in the manufacturing sector. A labour union is an organization of workers who have banded together to achieve common goals in key areas such as wages, hours, and other working conditions. The trade union, through its leadership, bargains with the employer on behalf of other union members and negotiates labour contracts with employers. This movement first rose among industrial workers.
== Effects on slavery ==
Ancient Mediterranean cultures relied on slavery throughout their economy. While serfdom largely supplanted the practice in Europe during the Middle Ages, several European powers reintroduced slavery extensively in the early modern period, particularly for the harshest labor in their colonies. The Industrial revolution played a central role in the later abolition of slavery, partly because domestic manufacturing's new economic dominance undercut interests in the slave trade.
Additionally, the new industrial methods required a complex division of labor with less worker supervision, which may have been incompatible with forced labor.
== War ==
The Industrial Revolution changed warfare, with mass-produced weaponry and supplies, machine-powered transportation, mobilization, the total war concept and weapons of mass destruction. Early instances of industrial warfare were the Crimean War and the American Civil War, but its full potential showed during the world wars. See also military-industrial complex, arms industries, military industry and modern warfare.
== Use in 20th century social science and politics ==
“Industrial society” took on a more specific meaning after World War II in the context of the Cold War, the internationalization of sociology through organizations like UNESCO, and the spread of American industrial relations to Europe. The cementation of the Soviet Union’s position as a world power inspired reflection on whether the sociological association of highly-developed industrial economies with capitalism required updating. The transformation of capitalist societies in Europe and the United States to state-managed, regulated welfare capitalism, often with significant sectors of nationalized industry, also contributed to the impression that they might be evolving beyond capitalism, or toward some sort of “convergence” common to all “types” of industrial societies, whether capitalist or communist. State management, automation, bureaucracy, institutionalized collective bargaining, and the rise of the tertiary sector were taken as common markers of industrial society.
The “industrial society” paradigm of the 1950s and 1960s was strongly marked by the unprecedented economic growth in Europe and the United States after World War II, and drew heavily on the work of economists like Colin Clark, John Kenneth Galbraith, W.W. Rostow, and Jean Fourastié. The fusion of sociology with development economics gave the industrial society paradigm strong resemblances to modernization theory, which achieved major influence in social science in the context of postwar decolonization and the development of post-colonial states.
The French sociologist Raymond Aron, who gave the most developed definition to the concept of “industrial society” in the 1950s, used the term as a comparative method to identify common features of the Western capitalist and Soviet-style communist societies. Other sociologists, including Daniel Bell, Reinhard Bendix, Ralf Dahrendorf, Georges Friedmann, Seymour Martin Lipset, and Alain Touraine, used similar ideas in their own work, though with sometimes very different definitions and emphases. The principal notions of industrial-society theory were also commonly expressed in the ideas of reformists in European social-democratic parties who advocated a turn away from Marxism and an end to revolutionary politics.
Because of its association with non-Marxist modernization theory and American anticommunist organizations like the Congress for Cultural Freedom, “industrial society” theory was often criticized by left-wing sociologists and Communists as a liberal ideology that aimed to justify the postwar status quo and undermine opposition to capitalism. However, some left-wing thinkers like André Gorz, Serge Mallet, Herbert Marcuse, and the Frankfurt School used aspects of industrial society theory in their critiques of capitalism.
== Selected bibliography of industrial society theory ==
Adorno, Theodor. "Late Capitalism or Industrial Society?" (1968)
Aron, Raymond. Dix-huit leçons sur la société industrielle. Paris: Gallimard, 1961.
Aron, Raymond. La lutte des classes: nouvelles leçons sur les sociétés industrielles. Paris: Gallimard, 1964.
Bell, Daniel. The End of Ideology: On the Exhaustion of Political Ideas in the Fifties. New York: Free Press, 1960.
Dahrendorf, Ralf. Class and Class Conflict in Industrial Society. Stanford: Stanford University Press, 1959.
Gorz, André. Stratégie ouvrière et néo-capitalisme. Paris: Seuil, 1964.
Friedmann, Georges. Le Travail en miettes. Paris: Gallimard, 1956.
Kaczynski, Theodore J. Industrial Society and Its Future. Berkeley, CA: Jolly Roger Press, 1995.
Kerr, Clark, et al. Industrialism and Industrial Man. Oxford: Oxford University Press, 1960.
Lipset, Seymour Martin. Political Man: The Social Bases of Politics. Garden City, NY: Doubleday, 1959.
Marcuse, Herbert. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon Press, 1964.
Touraine, Alain. Sociologie de l'action. Paris: Seuil, 1965.
Bell, Daniel. The Coming of Post-Industrial Society: A Venture in Social Forecasting. New York: Basic Books, 1973.
== See also ==
Developed country
Food industry
Industrialization
North–South divide
Post-industrial society
Western world
Industrial Revolution
Newly industrialized country
Mechanization
Dystopian future
== References == | Wikipedia/Industrial_system |
A software design description (a.k.a. software design document or SDD; just design document; also Software Design Specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design's stakeholders. An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision; it needs to be a stable reference and to outline all parts of the software and how they will work.
== Composition ==
The SDD usually contains the following information:
The data design describes the data structures that reside within the software. Attributes and relationships between data objects dictate the choice of data structures.
The architecture design uses information-flow characteristics and maps them into the program structure. The transformation mapping method is applied to exhibit distinct boundaries between incoming and outgoing data. The data flow diagrams allocate control input, processing, and output along three separate modules.
The interface design describes internal and external program interfaces, as well as the design of the human interface. Internal and external interface designs are based on the information obtained from the analysis model.
The procedural design describes structured programming concepts using graphical, tabular and textual notations.
These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
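Transform mapping, as described for the architecture design, can be sketched as partitioning the processes of a data flow diagram into incoming, central transform, and outgoing regions and allocating each region to its own module. The process names and regions below are invented for illustration, not taken from any particular SDD.

```python
# Illustrative transform mapping: each DFD process is tagged as part of
# the incoming flow, the central transform, or the outgoing flow, then
# allocated to one of three program-structure modules.
# All process names and region assignments are hypothetical.
dfd = [
    ("read input",     "incoming"),
    ("validate input", "incoming"),
    ("compute result", "transform"),
    ("format report",  "outgoing"),
    ("write output",   "outgoing"),
]

region_to_module = {
    "incoming":  "input",
    "transform": "processing",
    "outgoing":  "output",
}

modules = {"input": [], "processing": [], "output": []}
for process, region in dfd:
    modules[region_to_module[region]].append(process)

print(modules["processing"])  # ['compute result']
```

The point of the exercise is the distinct boundary it produces: everything before the central transform belongs to the input module, everything after it to the output module.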
== IEEE 1016 ==
IEEE 1016-2009, titled IEEE Standard for Information Technology—Systems Design—Software Design Descriptions, is an IEEE standard that specifies "the required information content and organization" for an SDD. IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions."
The 2009 edition was a major revision to IEEE 1016-1998, elevating it from recommended practice to full standard. This revision was modeled after IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-intensive Systems, extending the concepts of view, viewpoint, stakeholder, and concern from architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016, Introduction]
Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint. IEEE 1016 defines the following design viewpoints for use:
Context viewpoint
Composition viewpoint
Logical viewpoint
Dependency viewpoint
Information viewpoint
Patterns use viewpoint
Interface viewpoint
Structure viewpoint
Interaction viewpoint
State dynamics viewpoint
Algorithm viewpoint
Resource viewpoint
In addition, users of the standard are not limited to these viewpoints but may define their own.
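The view/viewpoint organization of IEEE 1016 can be sketched as a minimal data model: an SDD holds design views, each governed by one viewpoint that frames a set of design concerns. The class and attribute names below are an illustrative reading of the conceptual model, not definitions taken from the standard, and the concern strings are invented examples.

```python
from dataclasses import dataclass, field

# Minimal sketch of the IEEE 1016 conceptual model: an SDD is organized
# into design views, each following the conventions of one viewpoint.
# Class/attribute names and concern strings are illustrative only.

@dataclass
class Viewpoint:
    name: str
    concerns: list  # design concerns the viewpoint frames

@dataclass
class DesignView:
    viewpoint: Viewpoint
    elements: list = field(default_factory=list)

@dataclass
class SDD:
    views: list = field(default_factory=list)

    def covered_concerns(self):
        """Union of all concerns addressed by the views in this SDD."""
        return {c for v in self.views for c in v.viewpoint.concerns}

logical = Viewpoint("Logical", ["static structure", "reuse of types"])
interface = Viewpoint("Interface", ["service definition", "service access"])

sdd = SDD(views=[DesignView(logical), DesignView(interface)])
print(sorted(sdd.covered_concerns()))
```

A check like `covered_concerns` mirrors how the standard's viewpoints are meant to be used: stakeholders pick the viewpoints whose concerns matter to them, and the SDD is complete when every relevant concern is covered by some view.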
== IEEE status ==
IEEE 1016-2009 is currently listed as 'Inactive - Reserved'.
== See also ==
Game design document
High-level design
Low-level design
== References ==
== External links ==
IEEE 1016 website | Wikipedia/Software_design_description |
A strategic technology plan is a specific type of strategy plan that lets an organization know where it is now and where it wants to be at some time in the future with regard to the technology and infrastructure in the organization. It often consists of the following sections.
== Components of strategic technology plan ==
=== Mission, vision statement ===
A mission statement describes the overall purpose of the organization.
An example of a mission statement:
Our mission is to ensure our students have the desire for learning and provide them with the knowledge, skills, and values to become contributing citizens of the world.
A vision statement describes what the organization stands for, what it believes in, and why it exists. It describes the desired outcome, invoking a vivid mental picture of the company's goal.
An example of a vision statement:
The Carrington Public School System is committed to seeing students exercising self-control, being accountable, showing respect, actively learning, inquiring, discussing, questioning, debating, being self-motivated, creating, connecting instruction to life, and reflecting and revising.
=== Needs assessment ===
A needs assessment is a systematic exploration of the way things are and the way they should be. These things are usually associated with organizational and/or individual performance.
A needs assessment describes: teaching and learning, integration of technology with business requirements, curricula and instruction, educator preparation and development, administration and support services, infrastructure for technology.
Curriculum integration
When evaluating your needs, consider:
Current curriculum strengths and weaknesses and the process used to determine these strengths and weaknesses
How curriculum strategies are aligned to state standards
Current procedures for using technology to address any perceived curriculum weaknesses
How teachers integrate technology into their lessons
How students use technology
Professional development
When evaluating your needs, consider:
How the organization assesses the technology professional development needs of certified staff, administration, and non-certified staff
Technology professional development training available to certified staff
How the effectiveness of the professional development will be measured.
Equitable use of technology
When evaluating your needs, consider:
Availability of technology to students, staff, employees, and organization members
Amount of time technology is available to students, staff, or organization members
Description of types of assistive technology tools that are provided for students, employees or users with disabilities where necessary/applicable.
Infrastructure for technology
When evaluating your needs, consider:
Current technology infrastructure of the school/organization - explain the type of data and video networking and Internet access that is available
Effectiveness of present infrastructure and telecommunication service provided
How E-Rate has allowed the district to improve or increase technology infrastructure.
Administrative needs
When evaluating your needs, consider:
How administrative staff uses technology, accessing data for decision making, information system reporting, communication tools, information gathering, and record-keeping
Professional development opportunities that are available to administrative staff.
Stakeholders are composed of:
Board members
People affected by technology
Superintendent
Principals/assistants
Teachers/special needs teachers
Parents/students/community members
Director of Technology
Director of Instructional Technology
Teaching and learning design team
=== Technology initiative descriptions, goal statement and rationale ===
In order for technology to be effective, it must be tied to leadership, core visions, professional development, time, and assessment. The following goals state how the organization intends to use technology:
Improve student academic performance through the integration of curriculum and technology
Increase administrative uses of technology
Use technology as a medium to create an interactive partnership between the system, parents, community agencies, industry, and business partners
Use technology to support the professional growth of all staff, resulting in maximum learning for all students
=== Objectives (measurable and observable) ===
The objectives are tied to the mission and vision statements. Each goal has objectives and indicators. The objectives state in specific and measurable terms what must be accomplished in order to reach the larger goal.
The technology plan splits the objectives into categories:
Teaching and learning
Educator preparation and development
Administration and support services
Infrastructure for technology
Integration of technology with curricula and instruction
=== Hardware, software, and facility resource requirement ===
Staff development can only take place when the proper equipment is available. Effective technology plans focus not just on the technology itself but also on its applications, which tells teachers and administrators what they should be able to do with the technology. Professional development is the most important element in integrating technology into the classroom. Teachers need strategies to change the way they teach and to integrate new technology, and the only way to provide these is to supply the hardware, the software, and the training. Teachers and administrators should be able to use a variety of technology applications, such as the Internet, video cameras, iPods, and multimedia presentations.
=== Instructional resource requirement and staff development plan ===
In order for an organization to fulfill its mission and goals, it is important that all staff be provided with the necessary support and training opportunities to enable them to undertake their roles to the highest standard. The plan will provide training and educational opportunities for professional and personal development related to educational activities. Professional development is a key focus of the No Child Left Behind Act of 2001; the law requires Enhancing Education Through Technology entitlement funds to be directed toward professional development that is ongoing and high quality.
=== Itemized budget and rationale, evaluation method, and funding ===
The strategic plan contains an itemized budget, an evaluation method, and a funding source, amount, and timeline. The itemized budget is broken down across the years of the plan, with funds allocated to each year. The evaluation method collects data and disaggregates it to determine the outcomes.
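A multi-year itemized budget of the kind described above can be sketched as a simple data structure. The years, categories, and amounts below are purely illustrative assumptions, not figures from any actual plan:

```python
# Hypothetical multi-year itemized technology budget (all figures illustrative).
budget = {
    2024: {"hardware": 50_000, "software": 20_000, "professional_development": 15_000},
    2025: {"hardware": 30_000, "software": 25_000, "professional_development": 20_000},
    2026: {"hardware": 20_000, "software": 25_000, "professional_development": 25_000},
}

# Amount allocated to each year of the plan
yearly_totals = {year: sum(items.values()) for year, items in budget.items()}

# Total funding over the whole plan period
plan_total = sum(yearly_totals.values())

print(yearly_totals)  # {2024: 85000, 2025: 75000, 2026: 70000}
print(plan_total)     # 230000
```

Keeping the budget itemized by category within each year makes it straightforward to report both yearly allocations and category-level spending when the plan is evaluated.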
== See also ==
Information technology planning
Technology life cycle
Technology roadmap
== References ==
== Further reading ==
Fernandez, D. 1999. Texas CAN & affiliated charter schools.
Rouda, R., Kusy, M. 1995. Needs assessment: the first step.
McQuillan, M. 2008. Connecticut State Department of Education.
Staff Development Strategic Plan. 2005.