This is a list of television programs currently and formerly broadcast by the Indian television channel Gemini TV. == Former broadcasts == === Other serials === Aanandam Nandini Chandrakumari Ganga Yamuna Saraswati Krishnadasi Maya Vani Rani Naagini Aladdin == Reality shows == == References ==
https://en.wikipedia.org/wiki/List_of_programmes_broadcast_by_Gemini_TV
In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality: how well a range of different problems can be expressed for a variety of different architectures, and on its performance: how efficiently the compiled programs can execute. The implementation of a parallel programming model can take the form of a library invoked from a programming language or an extension to an existing language. Consensus around a particular programming model is important because it leads to different parallel computers being built with support for the model, thereby facilitating portability of software. In this sense, programming models are referred to as bridging between hardware and software. == Classification of parallel programming models == Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition. === Process interaction === Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit (invisible to the programmer). ==== Shared memory ==== Shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write to asynchronously. Asynchronous concurrent access can lead to race conditions, and mechanisms such as locks, semaphores and monitors can be used to avoid these. Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.
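As a concrete illustration of the shared-memory model described above, the following minimal Python sketch shows several threads updating a counter in a shared address space, with a lock guarding the read-modify-write against the race condition mentioned. The function and variable names are illustrative, not from any library.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Add to the shared counter; the lock prevents lost updates (a race condition)."""
    global counter
    for _ in range(n):
        with lock:          # acquire/release around the read-modify-write
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000; without the lock, concurrent updates could be lost
```

Semaphores and monitors, also mentioned above, play the same role: they serialize access to the shared state so that concurrent writes cannot interleave destructively.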
==== Message passing ==== In a message-passing model, parallel processes exchange data through passing messages to one another. These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready. The Communicating sequential processes (CSP) formalisation of message passing uses synchronous communication channels to connect processes, and led to important languages such as Occam, Limbo and Go. In contrast, the actor model uses asynchronous message passing and has been employed in the design of languages such as D, Scala and SALSA. ==== Partitioned global address space ==== Partitioned Global Address Space (PGAS) models provide a middle ground between shared memory and message passing. PGAS provides a global memory address space abstraction that is logically partitioned, where a portion is local to each process. Parallel processes communicate by asynchronously performing operations (e.g. reads and writes) on the global address space, in a manner reminiscent of shared-memory models. However, by semantically partitioning the global address space into portions, each with affinity to a particular process, PGAS models allow programmers to exploit locality of reference and enable efficient implementation on distributed-memory parallel computers. PGAS is offered by many parallel programming languages and libraries, such as Fortran 2008, Chapel, UPC++, and SHMEM. ==== Implicit interaction ==== In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. Two examples of implicit parallelism are with domain-specific languages, where the concurrency within high-level operations is prescribed, and with functional programming languages, because the absence of side-effects allows non-dependent functions to be executed in parallel.
However, this kind of parallelism is difficult to manage, and functional languages such as Concurrent Haskell and Concurrent ML provide features to manage parallelism explicitly and correctly. === Problem decomposition === A parallel program is composed of simultaneously executing processes. Problem decomposition relates to the way in which the constituent processes are formulated. ==== Task parallelism ==== A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. In Flynn's taxonomy, task parallelism is usually classified as MIMD/MPMD or MISD. ==== Data parallelism ==== A data-parallel model focuses on performing operations on a data set, typically a regularly structured array. A set of tasks will operate on this data, but independently on disjoint partitions. In Flynn's taxonomy, data parallelism is usually classified as MIMD/SPMD or SIMD. ==== Stream parallelism ==== Stream parallelism, also known as pipeline parallelism, focuses on dividing a computation into a sequence of stages, where each stage processes a portion of the input data. Each stage operates independently and concurrently, and the output of one stage serves as the input to the next stage. Stream parallelism is particularly suitable for applications with continuous data streams or pipelined computations. ==== Implicit parallelism ==== As with implicit process interaction, an implicit model of parallelism reveals nothing to the programmer, as the compiler, the runtime or the hardware is responsible for it. For example, in compilers, automatic parallelization is the process of converting sequential code into parallel code, and in computer architecture, superscalar execution is a mechanism whereby instruction-level parallelism is exploited to perform operations in parallel.
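The stream (pipeline) decomposition and the message-passing interaction described above can be sketched together in a few lines of Python: two stages run concurrently, connected by queues that carry messages between them, with the output of the first stage feeding the second. The stage functions and the None sentinel convention are illustrative assumptions, not a standard API.

```python
import threading
import queue

def stage1(inbox, outbox):
    """First pipeline stage: squares each incoming item."""
    while True:
        item = inbox.get()       # blocking receive (message passing)
        if item is None:         # sentinel marks end of the stream
            outbox.put(None)
            break
        outbox.put(item * item)  # asynchronous send to the next stage

def stage2(inbox, results):
    """Second pipeline stage: keeps only the even squares."""
    while True:
        item = inbox.get()
        if item is None:
            break
        if item % 2 == 0:
            results.append(item)

q1, q2 = queue.Queue(), queue.Queue()  # channels between the stages
results = []
workers = [threading.Thread(target=stage1, args=(q1, q2)),
           threading.Thread(target=stage2, args=(q2, results))]
for w in workers:
    w.start()
for n in range(6):                     # feed the input stream into the pipeline
    q1.put(n)
q1.put(None)
for w in workers:
    w.join()

print(results)  # [0, 4, 16]
```

Because the stages overlap in time, a long input stream is processed with both stages busy at once, which is exactly the throughput benefit that pipeline parallelism targets.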
== Terminology == Parallel programming models are closely related to models of computation. A model of parallel computation is an abstraction used to analyze the cost of computational processes, but it does not necessarily need to be practical, in that it need not be efficiently implementable in hardware and/or software. A programming model, in contrast, does specifically imply the practical considerations of hardware and software implementation. A parallel programming language may be based on one or a combination of programming models. For example, High Performance Fortran is based on shared-memory interactions and data-parallel problem decomposition, and Go provides mechanisms for shared-memory and message-passing interaction. == Example parallel programming models == == See also == Automatic parallelization Bridging model Concurrency Degree of parallelism Explicit parallelism List of concurrent and parallel programming languages Optical Multi-Tree with Shuffle Exchange Parallel external memory (Model) == References == == Further reading == Blaise Barney, Introduction to Parallel Computing, Lawrence Livermore National Laboratory, archived from the original on 2013-06-10, retrieved 2015-11-22 Murray I. Cole, Algorithmic Skeletons: Structured Management of Parallel Computation (PDF), University of Glasgow J. Darlinton; M. Ghanem; H. W. To (1993). "Structured parallel programming". Proceedings of Workshop on Programming Models for Massively Parallel Computers. pp. 160–169. doi:10.1109/PMMP.1993.315543. ISBN 0-8186-4900-3. S2CID 15265646. Ian Foster, Designing and Building Parallel Programs, Argonne National Laboratory
https://en.wikipedia.org/wiki/Parallel_programming_model
Design by contract (DbC), also known as contract programming, programming by contract and design-by-contract programming, is an approach for designing software. It prescribes that software designers should define formal, precise and verifiable interface specifications for software components, which extend the ordinary definition of abstract data types with preconditions, postconditions and invariants. These specifications are referred to as "contracts", in accordance with a conceptual metaphor with the conditions and obligations of business contracts. The DbC approach assumes all client components that invoke an operation on a server component will meet the preconditions specified as required for that operation. Where this assumption is considered too risky (as in multi-channel or distributed computing), the inverse approach is taken, meaning that the server component tests that all relevant preconditions hold true (before, or while, processing the client component's request) and replies with a suitable error message if not. == History == The term was coined by Bertrand Meyer in connection with his design of the Eiffel programming language and first described in various articles starting in 1986, as well as in the two successive editions (1988, 1997) of his book Object-Oriented Software Construction. Eiffel Software applied for trademark registration for Design by Contract in December 2003, and it was granted in December 2004. The current owner of this trademark is Eiffel Software. Design by contract has its roots in work on formal verification, formal specification and Hoare logic.
The original contributions include: A clear metaphor to guide the design process The application to inheritance, in particular a formalism for redefinition and dynamic binding The application to exception handling The connection with automatic software documentation == Description == The central idea of DbC is a metaphor for how elements of a software system collaborate with each other on the basis of mutual obligations and benefits. The metaphor comes from business life, where a "client" and a "supplier" agree on a "contract" that defines, for example, that: The supplier must provide a certain product (obligation) and is entitled to expect that the client has paid its fee (benefit). The client must pay the fee (obligation) and is entitled to get the product (benefit). Both parties must satisfy certain obligations, such as laws and regulations, applying to all contracts. Similarly, if the method of a class in object-oriented programming provides a certain functionality, it may: Expect a certain condition to be guaranteed on entry by any client module that calls it: the method's precondition—an obligation for the client, and a benefit for the supplier (the method itself), as it frees it from having to handle cases outside of the precondition. Guarantee a certain property on exit: the method's postcondition—an obligation for the supplier, and obviously a benefit (the main benefit of calling the method) for the client. Maintain a certain property, assumed on entry and guaranteed on exit: the class invariant. The contract is semantically equivalent to a Hoare triple which formalises the obligations. This can be summarised by the "three questions" that the designer must repeatedly answer in the contract: What does the contract expect? What does the contract guarantee? What does the contract maintain? Many programming languages have facilities to make assertions like these.
However, DbC considers these contracts to be so crucial to software correctness that they should be part of the design process. In effect, DbC advocates writing the assertions first. Contracts can be written as code comments, enforced by a test suite, or both, even if there is no special language support for contracts. The notion of a contract extends down to the method/procedure level; the contract for each method will normally contain the following pieces of information: Acceptable and unacceptable input values or types, and their meanings Return values or types, and their meanings Error and exception condition values or types that can occur, and their meanings Side effects Preconditions Postconditions Invariants (more rarely) Performance guarantees, e.g. for time or space used Subclasses in an inheritance hierarchy are allowed to weaken preconditions (but not strengthen them) and strengthen postconditions and invariants (but not weaken them). These rules approximate behavioural subtyping. All class relationships are between client classes and supplier classes. A client class is obliged to make calls to supplier features where the resulting state of the supplier is not violated by the client call. Subsequently, the supplier is obliged to provide a return state and data that does not violate the state requirements of the client. For instance, a supplier data buffer may require that data is present in the buffer when a delete feature is called. The supplier then guarantees to the client that when the delete feature finishes its work, the data item will, indeed, be deleted from the buffer. Another design-by-contract concept is the class invariant. The class invariant guarantees (for the local class) that the state of the class will be maintained within specified tolerances at the end of each feature execution.
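The buffer-delete contract discussed above can be sketched with plain assert statements in Python, which has no native DbC support. The hypothetical Buffer class below states preconditions and postconditions on its methods and checks a class invariant at the end of each feature; all names are invented for illustration.

```python
class Buffer:
    """A bounded buffer whose methods state their contracts as assertions."""

    def __init__(self, capacity):
        assert capacity > 0                       # precondition
        self._items = []
        self._capacity = capacity
        self._check_invariant()

    def _check_invariant(self):
        # class invariant: the buffer never exceeds its capacity
        assert 0 <= len(self._items) <= self._capacity

    def put(self, item):
        assert len(self._items) < self._capacity  # precondition: not full
        self._items.append(item)
        assert self._items[-1] == item            # postcondition: item stored
        self._check_invariant()

    def delete(self):
        assert self._items                        # precondition: data present
        old_size = len(self._items)
        item = self._items.pop()
        assert len(self._items) == old_size - 1   # postcondition: item removed
        self._check_invariant()
        return item

buf = Buffer(2)
buf.put("a")
assert buf.delete() == "a"
```

Note how the precondition frees the supplier from handling an empty buffer: calling delete on an empty Buffer simply fails the assertion, placing the obligation on the client, as the contract metaphor prescribes.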
When using contracts, a supplier should not try to verify that the contract conditions are satisfied—a practice known as offensive programming—the general idea being that code should "fail hard", with contract verification being the safety net. DbC's "fail hard" property simplifies the debugging of contract behavior, as the intended behaviour of each method is clearly specified. This approach differs substantially from that of defensive programming, where the supplier is responsible for figuring out what to do when a precondition is broken. More often than not, the supplier throws an exception to inform the client that the precondition has been broken, and in both cases—DbC and defensive programming alike—the client must figure out how to respond to that. In such cases, DbC makes the supplier's job easier. Design by contract also defines criteria for correctness for a software module: If the class invariant AND precondition are true before a supplier is called by a client, then the invariant AND the postcondition will be true after the service has been completed. When making calls to a supplier, a software module should not violate the supplier's preconditions. Design by contract can also facilitate code reuse, since the contract for each piece of code is fully documented. The contracts for a module can be regarded as a form of software documentation for the behavior of that module. == Performance implications == Contract conditions should never be violated during execution of a bug-free program. Contracts are therefore typically only checked in debug mode during software development. Later at release, the contract checks are disabled to maximize performance. In many programming languages, contracts are implemented with assert. Asserts are by default compiled away in release mode in C/C++, and similarly deactivated in C# and Java. 
Launching the Python interpreter with "-O" (for "optimize") as an argument will likewise cause the Python code generator to not emit any bytecode for asserts. This effectively eliminates the run-time costs of asserts in production code—irrespective of the number and computational expense of asserts used in development—as no such instructions will be included in production by the compiler. == Relationship to software testing == Design by contract does not replace regular testing strategies, such as unit testing, integration testing and system testing. Rather, it complements external testing with internal self-tests that can be activated both for isolated tests and in production code during a test-phase. The advantage of internal self-tests is that they can detect errors before they manifest themselves as invalid results observed by the client. This leads to earlier and more specific error detection. The use of assertions can be considered to be a form of test oracle, a way of testing the design by contract implementation. == Language support == === Languages with native support === Languages that implement most DbC features natively include: Ada 2012 Ciao Clojure Cobra D C++26 Dafny Eiffel Fortress Kotlin Mercury Oxygene (formerly Chrome and Delphi Prism) Racket (including higher order contracts, and emphasizing that contract violations must blame the guilty party and must do so with an accurate explanation) Sather Scala SPARK (via static analysis of Ada programs) Vala VDM Additionally, the standard method combination in the Common Lisp Object System has the method qualifiers :before, :after and :around that allow writing contracts as auxiliary methods, among other uses. 
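The Python behaviour just described can be shown with a minimal sketch: the precondition below is enforced by assert during development, and running the same file with python -O strips the check (and its cost) entirely. The function is illustrative, not from any library.

```python
def divide(a, b):
    # Under "python -O", assert statements (and "if __debug__:" blocks)
    # are not compiled into bytecode, so this precondition check costs
    # nothing in production.
    assert b != 0, "precondition violated: divisor must be non-zero"
    return a / b

print(divide(10, 2))  # 5.0
```

In a normal (debug) run, divide(1, 0) raises AssertionError at the contract boundary rather than a ZeroDivisionError deeper in the computation, which is the earlier, more specific error detection the section above describes.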
== See also == Component-based software engineering Correctness (computer science) Defensive programming Fail-fast system Formal methods Hoare logic Modular programming Program derivation Program refinement Strong typing Test-driven development Typestate analysis == Notes == == Bibliography == == External links == The Power of Design by Contract(TM) A top-level description of DbC, with links to additional resources. Building bug-free O-O software: An introduction to Design by Contract(TM) Older material on DbC. Benefits and drawbacks; implementation in RPS-Obix Using Code Contracts for Safer Code
https://en.wikipedia.org/wiki/Design_by_contract
A domain-specific language (DSL) is a computer language specialized to a particular application domain. This is in contrast to a general-purpose language (GPL), which is broadly applicable across domains. There are a wide variety of DSLs, ranging from widely used languages for common domains, such as HTML for web pages, down to languages used by only one or a few pieces of software, such as MUSH soft code. DSLs can be further subdivided by the kind of language, and include domain-specific markup languages, domain-specific modeling languages (more generally, specification languages), and domain-specific programming languages. Special-purpose computer languages have always existed in the computer age, but the term "domain-specific language" has become more popular due to the rise of domain-specific modeling. Simpler DSLs, particularly ones used by a single application, are sometimes informally called mini-languages. The line between general-purpose languages and domain-specific languages is not always sharp, as a language may have specialized features for a particular domain but be applicable more broadly, or conversely may in principle be capable of broad application but in practice used primarily for a specific domain. For example, Perl was originally developed as a text-processing and glue language, for the same domain as AWK and shell scripts, but was mostly used as a general-purpose programming language later on. By contrast, PostScript is a Turing-complete language, and in principle can be used for any task, but in practice is narrowly used as a page description language. == Use == The design and use of appropriate DSLs is a key part of domain engineering, by using a language suitable to the domain at hand – this may consist of using an existing DSL or GPL, or developing a new DSL. Language-oriented programming considers the creation of special-purpose languages for expressing problems as a standard part of the problem-solving process.
Creating a domain-specific language (with software to support it), rather than reusing an existing language, can be worthwhile if the language allows a particular type of problem or solution to be expressed more clearly than an existing language would allow and the type of problem in question reappears sufficiently often. Pragmatically, a DSL may be specialized to a particular problem domain, a particular problem representation technique, a particular solution technique, or other aspects of a domain. == Overview == A domain-specific language is created specifically to solve problems in a particular domain and is not intended to be able to solve problems outside of it (although that may be technically possible). In contrast, general-purpose languages are created to solve problems in many domains. The domain can also be a business area. Some examples of business areas include: life insurance policies (developed internally by a large insurance enterprise) combat simulation salary calculation billing A domain-specific language is somewhere between a tiny programming language and a scripting language, and is often used in a way analogous to a programming library. The boundaries between these concepts are quite blurry, much like the boundary between scripting languages and general-purpose languages. === In design and implementation === Domain-specific languages are languages (or often, declared syntaxes or grammars) with very specific goals in design and implementation. A domain-specific language can be a visual diagramming language, such as those created by the Generic Eclipse Modeling System, a programmatic abstraction, such as the Eclipse Modeling Framework, or a textual language. For instance, the command line utility grep has a regular expression syntax which matches patterns in lines of text. The sed utility defines a syntax for matching and replacing regular expressions.
Often, these tiny languages can be used together inside a shell to perform more complex programming tasks. The line between domain-specific languages and scripting languages is somewhat blurred, but domain-specific languages often lack low-level functions for filesystem access, interprocess control, and other functions that characterize full-featured programming languages, scripting or otherwise. Many domain-specific languages do not compile to byte-code or executable code, but to various kinds of media objects: GraphViz exports to PostScript, GIF, JPEG, etc., whereas Csound compiles to audio files, and a ray-tracing domain-specific language like POV compiles to graphics files. === Data definition languages === A data definition language like SQL presents an interesting case: it can be deemed a domain-specific language because it is specific to a particular domain (in SQL's case, accessing and managing relational databases), and is often called from another application, but SQL has more keywords and functions than many scripting languages, and is often thought of as a language in its own right, perhaps because of the prevalence of database manipulation in programming and the amount of mastery required to be an expert in the language. Further blurring this line, many domain-specific languages have exposed APIs, and can be accessed from other programming languages without breaking the flow of execution or calling a separate process, and can thus operate as programming libraries. === Programming tools === Some domain-specific languages expand over time to include full-featured programming tools, which further complicates the question of whether a language is domain-specific or not. A good example is the functional language XSLT, specifically designed for transforming one XML graph into another, which has been extended since its inception to allow (particularly in its 2.0 version) for various forms of filesystem interaction, string and date manipulation, and data typing.
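The regular-expression mini-language mentioned above is a DSL that is routinely embedded in a general-purpose host as strings: the pattern is written in the regex DSL, and the host's library interprets it. A short sketch using Python's standard re module, with an invented log-line format for illustration:

```python
import re

# The pattern string below is a program in the regular-expression DSL;
# Python's re module embeds an engine for that mini-language.
log_line = "2024-05-01 ERROR disk full"
pattern = re.compile(r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<level>\w+) (?P<msg>.+)")

m = pattern.match(log_line)
print(m.group("level"))  # ERROR
print(m.group("date"))   # 2024-05-01
```

This is the same "language in a library" pattern the section describes for grep and sed, except that here the DSL engine lives inside the host process rather than in a standalone tool.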
In model-driven engineering, many examples of domain-specific languages may be found like OCL, a language for decorating models with assertions or QVT, a domain-specific transformation language. However, languages like UML are typically general-purpose modeling languages. To summarize, an analogy might be useful: a Very Little Language is like a knife, which can be used in thousands of different ways, from cutting food to cutting down trees. A domain-specific language is like an electric drill: it is a powerful tool with a wide variety of uses, but a specific context, namely, putting holes in things. A General Purpose Language is a complete workbench, with a variety of tools intended for performing a variety of tasks. Domain-specific languages should be used by programmers who, looking at their current workbench, realize they need a better drill and find that a particular domain-specific language provides exactly that. == Domain-specific language topics == === External and Embedded Domain Specific Languages === DSLs implemented via an independent interpreter or compiler are known as External Domain Specific Languages. Well known examples include TeX or AWK. A separate category known as Embedded (or Internal) Domain Specific Languages are typically implemented within a host language as a library and tend to be limited to the syntax of the host language, though this depends on host language capabilities. === Usage patterns === There are several usage patterns for domain-specific languages: Processing with standalone tools, invoked via direct user operation, often on the command line or from a Makefile (e.g., grep for regular expression matching, sed, lex, yacc, the GraphViz toolset, etc.) 
Domain-specific languages which are implemented using programming language macro systems, and which are converted or expanded into a host general purpose language at compile time or run time An embedded domain-specific language (eDSL), also known as an internal domain-specific language, is a DSL that is implemented as a library in a "host" programming language. The embedded domain-specific language leverages the syntax, semantics and runtime environment (sequencing, conditionals, iteration, functions, etc.) of the host and adds domain-specific primitives that allow programmers to use the "host" programming language to create programs that generate code in the "target" programming language. Multiple eDSLs can easily be combined into a single program, and the facilities of the host language can be used to extend an existing eDSL. Other possible advantages of using an eDSL are improved type safety and better IDE tooling. eDSL examples: SQLAlchemy "Core", an SQL eDSL in Python; jOOQ, an SQL eDSL in Java; LINQ's "method syntax", an SQL eDSL in C#; and kotlinx.html, an HTML eDSL in Kotlin. Domain-specific languages which are called (at runtime) from programs written in general purpose languages like C or Perl, to perform a specific function, often returning the results of operation to the "host" programming language for further processing; generally, an interpreter or virtual machine for the domain-specific language is embedded into the host application (e.g. format strings, a regular expression engine) Domain-specific languages which are embedded into user applications (e.g., macro languages within spreadsheets) and which are (1) used to execute code that is written by users of the application, (2) dynamically generated by the application, or (3) both. Many domain-specific languages can be used in more than one way. DSL code embedded in a host language may have special syntax support, such as regexes in sed, AWK, Perl or JavaScript, or may be passed as strings.
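An embedded DSL of the kind described above can be sketched in a few lines: a toy Python query builder whose method chaining (the host language's ordinary syntax) generates a statement in the target language, SQL. All class and method names here are invented for illustration and do not belong to any real library.

```python
class Query:
    """A toy embedded DSL: host-language method chaining builds target-language SQL."""

    def __init__(self, table):
        self._table = table
        self._columns = ["*"]
        self._wheres = []

    def select(self, *columns):
        self._columns = list(columns)
        return self          # returning self enables chaining

    def where(self, condition):
        self._wheres.append(condition)
        return self

    def to_sql(self):
        # Generate code in the "target" language from the chained calls.
        sql = f"SELECT {', '.join(self._columns)} FROM {self._table}"
        if self._wheres:
            sql += " WHERE " + " AND ".join(self._wheres)
        return sql

q = Query("users").select("name", "email").where("age > 21").where("active = 1")
print(q.to_sql())
# SELECT name, email FROM users WHERE age > 21 AND active = 1
```

Because the eDSL is just a library, it composes freely with the rest of the host program, and host facilities (functions, loops, conditionals) can assemble queries dynamically, which is the combination-and-extension benefit the section notes.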
=== Design goals === Adopting a domain-specific language approach to software engineering involves both risks and opportunities. The well-designed domain-specific language manages to find the proper balance between these. Domain-specific languages have important design goals that contrast with those of general-purpose languages: Domain-specific languages are less comprehensive. Domain-specific languages are much more expressive in their domain. Domain-specific languages should exhibit minimal redundancy. === Idioms === In programming, idioms are methods imposed by programmers to handle common development tasks, e.g.: Ensure data is saved before the window is closed. Edit code whenever command-line parameters change because they affect program behavior. General purpose programming languages rarely support such idioms, but domain-specific languages can describe them, e.g.: A script can automatically save data. A domain-specific language can parameterize command line input. == Examples == Examples of domain-specific programming languages include HTML, Logo for pencil-like drawing, Verilog and VHDL hardware description languages, MATLAB and GNU Octave for matrix programming, Mathematica, Maple and Maxima for symbolic mathematics, Specification and Description Language for reactive and distributed systems, spreadsheet formulas and macros, SQL for relational database queries, YACC grammars for creating parsers, regular expressions for specifying lexers, the Generic Eclipse Modeling System for creating diagramming languages, Csound for sound and music synthesis, and the input languages of GraphViz and GrGen, software packages used for graph layout and graph rewriting, Hashicorp Configuration Language used for Terraform and other Hashicorp tools, Puppet also has its own configuration language. === GameMaker Language === The GML scripting language used by GameMaker Studio is a domain-specific language targeted at novice programmers to easily be able to learn programming. 
The language blends elements of several languages, including Delphi, C++, and BASIC. After compilation, most functions in the language actually call runtime functions written in a language specific to the target platform, so their final implementation is not visible to the user. The language primarily serves to make it easy for anyone to pick it up and develop a game; because the GameMaker runtime handles the main game loop and provides the implementations of the called functions, the simplest game requires only a few lines of code rather than thousands. === ColdFusion Markup Language === ColdFusion's associated scripting language is another example of a domain-specific language for data-driven websites. This scripting language is used to weave together languages and services such as Java, .NET, C++, SMS, email, email servers, http, ftp, exchange, directory services, and file systems for use in websites. The ColdFusion Markup Language (CFML) includes a set of tags that can be used in ColdFusion pages to interact with data sources, manipulate data, and display output. CFML tag syntax is similar to HTML element syntax. === FilterMeister === FilterMeister is a programming environment, with a programming language that is based on C, for the specific purpose of creating Photoshop-compatible image processing filter plug-ins; FilterMeister runs as a Photoshop plug-in itself and it can load and execute scripts or compile and export them as independent plug-ins. Although the FilterMeister language reproduces a significant portion of the C language and function library, it contains only those features which can be used within the context of Photoshop plug-ins and adds a number of specific features only useful in this specific domain.
=== MediaWiki templates === The Template feature of MediaWiki is an embedded domain-specific language whose fundamental purpose is to support the creation of page templates and the transclusion (inclusion by reference) of MediaWiki pages into other MediaWiki pages. === Software engineering uses === There has been much interest in domain-specific languages to improve the productivity and quality of software engineering. Domain-specific languages could provide a robust set of tools for efficient software engineering. Such tools are beginning to make their way into the development of critical software systems. The Software Cost Reduction Toolkit is an example of this. The toolkit is a suite of utilities including a specification editor to create a requirements specification, a dependency graph browser to display variable dependencies, a consistency checker to catch missing cases in well-formed formulas in the specification, a model checker and a theorem prover to check program properties against the specification, and an invariant generator that automatically constructs invariants based on the requirements. A newer development is language-oriented programming, an integrated software engineering methodology based mainly on creating, optimizing, and using domain-specific languages. === Metacompilers === Complementing language-oriented programming, as well as all other forms of domain-specific languages, are the class of compiler writing tools called metacompilers. A metacompiler is not only useful for generating parsers and code generators for domain-specific languages, but a metacompiler itself compiles a domain-specific metalanguage specifically designed for the domain of metaprogramming. Besides parsing domain-specific languages, metacompilers are useful for generating a wide range of software engineering and analysis tools. The metacompiler methodology is often found in program transformation systems.
Metacompilers that have played a significant role in both computer science and the computer industry include Meta-II and its descendant TreeMeta. === Unreal Engine before version 4 and other games === Unreal and Unreal Tournament unveiled a language called UnrealScript. This allowed for rapid development of modifications compared to the competitor Quake (using the Id Tech 2 engine). The Id Tech engine used standard C code, meaning C had to be learned and properly applied, while UnrealScript was optimized for ease of use and efficiency. Similarly, more recent games have introduced their own specific languages for development; a common example is Lua for scripting. === Rules engines for policy automation === Various business rules engines have been developed for automating policy and business rules used in both government and private industry. ILOG, Oracle Policy Automation, DTRules, Drools and others provide support for DSLs aimed at various problem domains. DTRules goes so far as to define an interface for the use of multiple DSLs within a rule set. The purpose of business rules engines is to define a representation of business logic in as human-readable a fashion as possible. This allows both subject-matter experts and developers to work with and understand the same representation of the business logic. Most rules engines provide an approach to simplifying the control structures for business logic (for example, using declarative rules or decision tables) coupled with alternatives to programming syntax in favor of DSLs. === Statistical modelling languages === Statistical modelers have developed domain-specific languages such as R (an implementation of the S language), BUGS, JAGS, and Stan. These languages provide a syntax for describing a Bayesian model and generate a method for solving it using simulation.
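The describe-then-solve style of these modelling languages can be suggested with a toy example, sketched here in Python rather than in actual BUGS/Stan syntax: a Bayesian coin model (uniform prior on the coin's bias, binomial likelihood) is "described" and then solved numerically by grid approximation instead of simulation.

```python
# Toy illustration of the describe-then-solve style of statistical
# modelling DSLs. This is plain Python, not BUGS/Stan syntax; the
# function name and grid-approximation approach are illustrative choices.

def posterior_mean(heads, flips, grid_size=10_000):
    """Posterior mean of a coin's bias under a uniform prior,
    computed by grid approximation over candidate bias values."""
    total_weight = 0.0
    weighted_sum = 0.0
    for i in range(1, grid_size):
        p = i / grid_size                           # candidate bias
        # Unnormalized posterior: uniform prior times binomial likelihood
        w = p ** heads * (1 - p) ** (flips - heads)
        total_weight += w
        weighted_sum += w * p
    return weighted_sum / total_weight

# 7 heads in 10 flips; the analytic answer is (7 + 1) / (10 + 2) = 2/3
estimate = posterior_mean(7, 10)
print(round(estimate, 3))  # close to 0.667
```

A real modelling language separates these two halves completely: the user writes only the model description, and the system supplies the solver (typically a sampler rather than a grid).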
=== Generate model and services to multiple programming languages === Object handling and services can be generated from an interface description language for a domain-specific language such as JavaScript for web applications, HTML for documentation, C++ for high-performance code, etc. This is done by cross-language frameworks such as Apache Thrift or Google Protocol Buffers. === Gherkin === Gherkin is a language designed to define test cases to check the behavior of software, without specifying how that behavior is implemented. It is meant to be read and used by non-technical users, using a natural-language syntax and a line-oriented design. The tests defined with Gherkin must then be implemented in a general programming language; the steps in a Gherkin program then act as a syntax for method invocation accessible to non-developers. === Other examples === Other prominent examples of domain-specific languages include:
Game Description Language
OpenGL Shading Language
Gradle
ActionScript
== Advantages and disadvantages == Some of the advantages:
Domain-specific languages allow solutions to be expressed in the idiom and at the level of abstraction of the problem domain. The idea is that domain experts themselves may understand, validate, modify, and often even develop domain-specific language programs. However, this is seldom the case.
Domain-specific languages allow validation at the domain level. As long as the language constructs are safe, any sentence written with them can be considered safe.
Domain-specific languages can help to shift the development of business information systems from traditional software developers to the typically larger group of domain experts who (despite having less technical expertise) have a deeper knowledge of the domain.
Domain-specific languages are easier to learn, given their limited scope.
Some of the disadvantages:
Cost of learning a new language
Limited applicability
Cost of designing, implementing, and maintaining a domain-specific language, as well as the tools required to develop with it (IDE)
Finding, setting, and maintaining proper scope
Difficulty of balancing trade-offs between domain-specificity and general-purpose programming language constructs
Potential loss of processor efficiency compared with hand-coded software
Proliferation of similar non-standard domain-specific languages, for example, a DSL used within one insurance company versus a DSL used within another insurance company
Non-technical domain experts can find it hard to write or modify DSL programs by themselves
Increased difficulty of integrating the DSL with other components of the IT system (as compared to integrating with a general-purpose language)
Low supply of experts in a particular DSL tends to raise labor costs
Harder to find code examples
== Tools for designing domain-specific languages == JetBrains MPS is a tool for designing domain-specific languages. It uses projectional editing, which allows overcoming the limits of language parsers and building DSL editors, such as ones with tables and diagrams. It implements language-oriented programming. MPS combines an environment for language definition, a language workbench, and an integrated development environment (IDE) for such languages. MontiCore is a language workbench for the efficient development of domain-specific languages. It processes an extended grammar format that defines the DSL and generates Java components for processing the DSL documents. Xtext is an open-source software framework for developing programming languages and domain-specific languages (DSLs). Unlike standard parser generators, Xtext generates not only a parser but also a class model for the abstract syntax tree. In addition, it provides a fully featured, customizable Eclipse-based IDE. The project was archived in April 2023.
Racket is a cross-platform language toolchain including native code, JIT and JavaScript compiler, IDE (in addition to supporting Emacs, Vim, VSCode and others) and command line tools designed to accommodate creating both domain-specific and general purpose languages. == See also == Language workbench Architecture description language Domain-specific entertainment language Language for specific purposes Jargon Metalinguistic abstraction Programming domain == References == == Further reading == Mernik, Marjan; Heering, Jan & Sloane, Anthony M. (2005). "When and how to develop domain-specific languages". ACM Computing Surveys. 37 (4): 316–344. doi:10.1145/1118890.1118892. S2CID 207158373. Spinellis, Diomidis (2001). "Notable design patterns for domain specific languages". Journal of Systems and Software. 56 (1): 91–99. doi:10.1016/S0164-1212(00)00089-3. Parr, Terence (2007). The Definitive ANTLR Reference: Building Domain-Specific Languages. Pragmatic Bookshelf. ISBN 978-0-9787392-5-6. Larus, James (2009). "Spending Moore's Dividend". Communications of the ACM. 52 (5): 62–69. doi:10.1145/1506409.1506425. ISSN 0001-0782. S2CID 2803479. Werner Schuster (June 15, 2007). "What's a Ruby DSL and what isn't?". C4Media. Retrieved 2009-09-08. Fowler, Martin (2011). Domain-Specific Languages. Addison-Wesley. ISBN 978-0-321-71294-3. == External links == "Minilanguages", The Art of Unix Programming, by Eric S. Raymond Martin Fowler on domain-specific languages and Language Workbenches. 
Also in a video presentation Domain-Specific Languages: An Annotated Bibliography Archived 2016-03-16 at the Wayback Machine One Day Compilers: Building a small domain-specific language using OCaml Usenix Association: Conference on Domain-Specific Languages (DSL '97) and 2nd Conference on Domain-Specific Languages (DSL '99) Internal Domain-Specific Languages The complete guide to (external) Domain Specific Languages jEQN Archived 2021-01-31 at the Wayback Machine example of internal Domain-Specific Language for the Modeling and Simulation of Extended Queueing Networks. Articles External DSLs with Eclipse technology "Building Domain-Specific Languages over a Language Framework". 1997. CiteSeerX 10.1.1.50.4685. Using Acceleo with GMF : Generating presentations from a MindMap DSL modeler Archived 2016-07-30 at the Wayback Machine UML vs. Domain-Specific Languages Sagar Sen; et al. (2009). "Meta-model Pruning". CiteSeerX 10.1.1.156.6008.
https://en.wikipedia.org/wiki/Domain-specific_language
In computer science, a rule-based system is a computer system in which domain-specific knowledge is represented in the form of rules and general-purpose reasoning is used to solve problems in the domain. Two different kinds of rule-based systems emerged within the field of artificial intelligence in the 1970s: Production systems, which use if-then rules to derive actions from conditions. Logic programming systems, which use conclusion-if-conditions rules to derive conclusions from conditions. The differences and relationships between these two kinds of rule-based system have been a major source of misunderstanding and confusion. Both kinds of rule-based systems use either forward or backward chaining, in contrast with imperative programs, which execute commands listed sequentially. However, logic programming systems have a logical interpretation, whereas production systems do not. == Production system rules == A classic example of a production rule-based system is the domain-specific expert system that uses rules to make deductions or choices. For example, an expert system might help a doctor choose the correct diagnosis based on a cluster of symptoms, or select tactical moves to play a game. Rule-based systems can be used to perform lexical analysis to compile or interpret computer programs, or in natural language processing. Rule-based programming attempts to derive execution instructions from a starting set of data and rules. This is a more indirect method than that employed by an imperative programming language, which lists execution steps sequentially. === Construction === A typical rule-based system has four basic components: A list of rules or rule base, which is a specific type of knowledge base. An inference engine or semantic reasoner, which infers information or takes action based on the interaction of input and the rule base.
The interpreter executes a production system program by performing the following match-resolve-act cycle: Match: In this first phase, the condition sides of all productions are matched against the contents of working memory. As a result a set (the conflict set) is obtained, which consists of instantiations of all satisfied productions. An instantiation of a production is an ordered list of working memory elements that satisfies the condition side of the production. Conflict-resolution: In this second phase, one of the production instantiations in the conflict set is chosen for execution. If no productions are satisfied, the interpreter halts. Act: In this third phase, the actions of the production selected in the conflict-resolution phase are executed. These actions may change the contents of working memory. At the end of this phase, execution returns to the first phase. Temporary working memory, which is a database of facts. A user interface or other connection to the outside world through which input and output signals are received and sent. Whereas the matching phase of the inference engine has a logical interpretation, the conflict resolution and action phases do not. Instead, "their semantics is usually described as a series of applications of various state-changing operators, which often gets quite involved (depending on the choices made in deciding which ECA rules fire, when, and so forth), and they can hardly be regarded as declarative". == Logic programming rules == The logic programming family of computer systems includes the programming language Prolog, the database language Datalog and the knowledge representation and problem-solving language Answer Set Programming (ASP). In all of these languages, rules are written in the form of clauses: A :- B1, ..., Bn. and are read as declarative sentences in logical form: A if B1 and ... and Bn. 
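The clause form above, A :- B1, ..., Bn, can be illustrated with a tiny backward-chaining interpreter, sketched here in Python rather than Prolog. This is a deliberately minimal propositional version (no variables or unification, which a real logic programming system would need); the rule names are invented for the example.

```python
# Minimal propositional backward chaining over rules of the form
# head :- body1, ..., bodyn. A Python sketch, not a real Prolog engine.
# Each head maps to a list of alternative bodies; an empty body is a fact.

rules = {
    "light_on": [["power_ok", "switch_closed"]],  # light_on :- power_ok, switch_closed.
    "power_ok": [[]],        # a fact: empty body always succeeds
    "switch_closed": [[]],   # another fact
}

def solve(goal):
    """Succeed if some rule for `goal` has a body whose subgoals all succeed."""
    for body in rules.get(goal, []):
        if all(solve(subgoal) for subgoal in body):
            return True
    return False

print(solve("light_on"))   # True: both subgoals reduce to facts
print(solve("door_open"))  # False: no rule concludes it
```

Reading the same rules Datalog style, a forward-chaining engine would instead start from the facts and derive light_on; the rules themselves are unchanged, only the direction of use differs.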
In the simplest case of Horn clauses (or "definite" clauses), which are a subset of first-order logic, all of the A, B1, ..., Bn are atomic formulae. Although Horn clause logic programs are Turing complete, for many practical applications, it is useful to extend Horn clause programs by allowing negative conditions, implemented by negation as failure. Such extended logic programs have the knowledge representation capabilities of a non-monotonic logic. == Differences and relationships between production rules and logic programming rules == The most obvious difference between the two kinds of systems is that production rules are typically written in the forward direction, if A then B, and logic programming rules are typically written in the backward direction, B if A. In the case of logic programming rules, this difference is superficial and purely syntactic. It does not affect the semantics of the rules. Nor does it affect whether the rules are used to reason backwards, Prolog style, to reduce the goal B to the subgoals A, or whether they are used, Datalog style, to derive B from A. In the case of production rules, the forward direction of the syntax reflects the stimulus-response character of most production rules, with the stimulus A coming before the response B. Moreover, even in cases when the response is simply to draw a conclusion B from an assumption A, as in modus ponens, the match-resolve-act cycle is restricted to reasoning forwards from A to B. Reasoning backwards in a production system would require the use of an entirely different kind of inference engine. In his Introduction to Cognitive Science, Paul Thagard includes logic and rules as alternative approaches to modelling human thinking. He does not consider logic programs in general, but he considers Prolog to be, not a rule-based system, but "a programming language that uses logic representations and deductive techniques" (page 40). 
He argues that rules, which have the form IF condition THEN action, are "very similar" to logical conditionals, but they are simpler and have greater psychological plausibility (page 51). Among other differences between logic and rules, he argues that logic uses deduction, but rules use search (page 45) and can be used to reason either forward or backward (page 47). Sentences in logic "have to be interpreted as universally true", but rules can be defaults, which admit exceptions (page 44). He does not observe that all of these features of rules apply to logic programming systems. == See also == Logic programming Expert systems Rewriting RuleML List of rule-based languages Learning classifier system Rule-based machine learning Rule-based modeling == References ==
https://en.wikipedia.org/wiki/Rule-based_system
An educational programming language (EPL) is a programming language used primarily as a learning tool, and a starting point before transitioning to more complex programming languages. == Types of educational programming languages == === Assembly languages === Initially, machine code was the sole method of programming computers. Assembly language (ASM) introduced mnemonics to replace low-level instructions, making it one of the oldest programming languages still used today. Numerous dialects and implementations exist, each tailored to a specific computer processor architecture. Assembly languages are low-level and more challenging to use, as they are untyped and rigid. For educational purposes, simplified dialects of assembly languages have been developed to make coding more accessible to beginners. Assembly languages are designed for specific processor architectures, and they must be written with the corresponding hardware in mind. Unlike higher-level languages, educational assembly languages require a representation of a processor, which can be virtual or physical. These languages are often used in educational settings to demonstrate the fundamental operations of a computer processor. Little Man Computer (LMC) (1965) is an instructional model of a simple von Neumann architecture computer. It includes the basic features of modern computers and can be programmed using machine code (usually in decimal) or assembly. The model simulates a computer environment using a visual metaphor of a person (the "Little Man") in a room with 100 mailboxes (memory), a calculator (the accumulator) and a program counter. LMC is used to help students understand basic processor functions and memory management. MIX (1968) and MMIX (1999) are computer models featured in Donald Knuth's The Art of Computer Programming. The MIX computer is designed for educational purposes, illustrating how a basic machine language operates.
Despite its simplicity, it can handle complex tasks typical of high-performance computers. MIX allows programming in both binary and decimal, with software emulators available for both models. MMIX, which superseded MIX, is a 64-bit RISC instruction set architecture, modernized for teaching contemporary computer architecture. DLX (1994) is a reduced instruction set computer (RISC) processor architecture created by key developers of the MIPS and Berkeley RISC designs. DLX is a simplified version of MIPS, offering a 32-bit load/store architecture commonly used in college-level computer architecture courses. Next Byte Codes (NBC) (2007) is a simple assembly language used for programming Lego Mindstorms NXT programmable bricks. The NBC compiler produces NXT-compatible machine code and is supported on Windows, macOS and Linux. Little Computer 3 (LC-3) (2019) is an assembly language with a simplified instruction set, enabling the writing of moderately complex assembly programs. It includes many features found in more advanced languages, making it useful for teaching basic programming and computer architecture. It is primarily used in introductory computer science and engineering courses. === BASIC variants === BASIC (Beginner's All-purpose Symbolic Instruction Code) was invented in 1964 to provide computer access to non-science students. It became popular on minicomputers during the 1960s and became a standard computing language for microcomputers during the late 1970s and early 1980s. The goals of BASIC were focused on the need to learn to program easily; the language was designed to: Be easy for beginners to use. Be interactive. Provide clear and friendly error messages. Respond quickly. Not require an understanding of computer hardware or operating systems. What made BASIC attractive for education was the small size of programs that could illustrate a concept in a dozen lines. BASIC continues to be frequently self-taught with tutorials and implementations.
See also: List of BASIC dialects by platform BASIC offers a learning path from learning-oriented BASICs such as Microsoft Small Basic, BASIC-256 and SiMPLE, to more full-featured BASICs like Visual Basic .NET and Gambas. Microsoft Small Basic is a restricted version of Visual Basic, designed as "an introductory programming language for beginners". It is intentionally minimal, with just 15 keywords for basic functionality. By providing specific libraries for topics that interest children, it lets them create programs for both the web and desktop environments. For example, with 6 lines of code, it is possible to demonstrate a random network image viewer using Flickr as the source. The system utilizes the Microsoft Visual Studio IDE to provide auto-completion and context-sensitive help. Basic-256 is an easy-to-use version of BASIC designed to teach anybody the basics of computer programming. It uses traditional BASIC control structures (gosub, for loops, goto) for easy understanding of program flow control. It has a built-in graphics mode that allows children to draw pictures on the screen within minutes. SiMPLE is a programming development system that was created to provide easy programming abilities for everybody, especially non-professionals. It is somewhat like Applesoft BASIC. It is compiled and lets users make their own libraries of often-used functions. "SiMPLE" is a generic term for three slightly different versions of the language: Micro-SiMPLE (using only 4 keywords), Pro-SiMPLE, and Ultra-SiMPLE (using 23 keywords). Hot Soup Processor is a BASIC-derived language used in Japanese schools. TI-BASIC is a simple BASIC-like language implemented in Texas Instruments graphing calculators, often serving as a student's first look at programming. Small BASIC is a fast and easy-to-learn BASIC language interpreter ideal for everyday calculations, scripts and prototypes.
It includes trigonometric, matrix and algebra functions, a built-in IDE, a powerful string library, system, sound and graphic commands, and a structured programming syntax. === C-based === Ch is a C/C++ interpreter designed to help non-CS students learn math, computing and programming in C and C++. It extends C with numerical, 2D/3D graphical plotting and scripting features. === Java-based === NetLogo, written in Java and Scala, is a development environment for building and exploring scientific models, specifically agent-based models. === Lisp-based === Lisp is the second-oldest family of programming languages in use today and as such has many dialects and implementations with a wide range of difficulties. Lisp was originally created as a practical mathematical notation for computer programs, based on lambda calculus, which makes it particularly well suited for teaching theories of computing. As one of the earliest languages, Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, object-oriented programming and the self-hosting compiler, all of which are useful for learning computer science. The name LISP derives from "List Processing language". Linked lists are one of the language's major data structures, and Lisp source code is made of lists. Thus, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific languages embedded in Lisp. Therefore, Lisp can be useful for learning language design. === Logo-based === Logo is a language that was specifically designed to introduce children to programming. The first part of learning Logo deals with "turtle graphics" (derived from turtle robots) used as early as 1969. In modern implementations, an abstract drawing device, called the turtle, is used to make programming for children very attractive by concentrating on doing turtle graphics.
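The turtle metaphor needs no graphics back end to be understood: the turtle is just a position and a heading, and drawing commands update that state. The following Python sketch (an illustration only; Logo itself, or Python's standard turtle module, would render each forward step as a line on screen) walks the classic first Logo program, a square.

```python
import math

# A turtle reduced to its state: position (x, y) and heading in degrees,
# with 0 degrees pointing along the positive x-axis.
# Logo's FORWARD and RIGHT commands simply update this state; a graphics
# layer would draw a line segment for every forward step.

class Turtle:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.path = [(0.0, 0.0)]          # points visited, for drawing

    def forward(self, distance):
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def right(self, angle):
        self.heading = (self.heading - angle) % 360

# The classic first Logo program: FORWARD 100, RIGHT 90, four times.
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)

# After four sides and four right turns the turtle is home again.
print(round(t.x), round(t.y), t.heading)  # 0 0 0.0
```

This state-machine view is exactly what Papert's "body-syntonic reasoning" exploits: a child can predict the turtle's final position by walking the square themselves.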
Seymour Papert, one of the creators of Logo, was a prominent figure in constructionism, a variety of constructivist learning theories. Papert argued that activities like writing would naturally be learned by much younger children provided that they adopt a computing culture. Logo was designed to introduce children to programming through visual aids and concepts in a technology-focused curriculum. "More important than having an early start on intellectual building is being saved from a long period of dependency during which one learns to think of learning as something that has to be dished out by a more powerful other...Such children would not define themselves or allow society to define them as intellectually helpless." It has been used by children as young as 3 years old and has a track record of 30 years of success in education. Since Logo is actually a streamlined version of Lisp, it can be used with more advanced students to introduce the basic concepts of computer science and even artificial intelligence. Logo is available on multiple platforms, offered in both free and commercial versions for educational use. === Scala-based === Kojo is an interactive desktop development environment, developed primarily for educational purposes. The application runs on Windows, Linux and macOS. Kojo is a learning environment, with many different features that help with the exploration, learning and teaching of concepts in computer programming, critical thinking, math, science, art, music, creative thinking, and computer and internet literacy. === Smalltalk-based === As part of the One Laptop per Child project, a sequence of Smalltalk-based languages has been developed, each designed to act as an introduction to the next. The structure is Scratch to Etoys to Squeak to any Smalltalk.
Each provides graphical environments that may be used to teach not only programming concepts to kids but also physics and mathematics simulations, story-telling exercises, etc., through the use of constructive learning. Smalltalk and Squeak are fully featured application development languages that have been around and well-respected for decades; Scratch is a children's learning tool. Scratch 1.0 is implemented in Smalltalk. See below for more information. Etoys is based on the idea of programmable virtual entities behaving on the computer screen. Etoys provides a media-rich authoring environment with a simple, powerful scripted object model for many kinds of objects created by end-users. It includes 2D and 3D graphics, images, text, particles, presentations, web pages, videos, sound and MIDI, as well as the ability to share desktops with other Etoys users in real time. Many forms of immersive mentoring and play can be done over the Internet. It is multilingual and has been used successfully in the United States, Europe, South America, Japan, Korea, India, Nepal and elsewhere. The program is aimed at children between the ages of 9 and 12. Squeak is a modern, open-source, full-featured implementation of the Smalltalk language and environment. Smalltalk is an object-oriented, dynamically typed, reflective programming language created to underpin the "new world" of computing exemplified by "human-computer symbiosis". Like Lisp, it has image-based persistence, so everything is modifiable from within the language (see Smalltalk#Reflection). It has greatly influenced the industry, introducing many of the concepts in object-oriented programming and just-in-time compilation. Squeak is the vehicle for a wide range of projects including multimedia applications, educational platforms and commercial web application development. Squeak is designed to be highly portable and easy to debug, analyze and change, as its virtual machine is written fully in Smalltalk.
=== Pascal === Pascal is an ALGOL-based programming language designed by Niklaus Wirth in approximately 1970 with the goal of teaching structured programming. From the late 1970s to the late 1980s, it was the primary choice in introductory computer science classes for teaching students programming in both the US and Europe. Its use has since expanded beyond education into general real-world applications. === Other === CircuitPython is a beginner-oriented version of Python for interactive electronics and education. Rapira is an ALGOL-like procedural programming language, with a simple interactive development environment, developed in the Soviet Union to teach programming in schools. Src:Card is a tactile offline programming language embedded in an educational card game. == Children == AgentSheets and AgentCubes are two computational thinking tools for authoring 2D/3D games and simulations. Authoring takes place through desktop applications or browser-based apps, and it can create 2D/3D games playable in HTML5-compliant browsers, including mobile ones. Alice is a free programming software designed to teach event-driven object-oriented programming (OOP) to children. Programmers create interactive stories using a modern IDE interface with a drag-and-drop style of programming. The target audience ranges from middle school children all the way to university students. Storytelling Alice is a variant of the Alice software designed for younger children, with a greater emphasis on its capabilities in terms of storytelling. Blockly is an open-source web-based graphical language where users can drag blocks together to build an application with no typing required. It was developed by Google. It allows users to convert their Blockly code into other programming languages such as PHP, Python, etc. CiMPLE was a visual language for programming a robotic kit, designed for children. It was built on top of C as a DSL.
ThinkLabs, an Indian robotics-education startup, built it for the iPitara Robotics Kit. The language bore a strong resemblance to the C language. At least one school in Bangalore, India, bought the iPitara kit and had its students program the robots using CiMPLE. More information is available at the CiMPLE Original Developers Weblog. ThinkLabs eventually switched to using "THiNK VPL" as their visual programming software. Physical Etoys is a free open-source extension of Etoys. Its philosophy is that "it helps children explore their own creativity by combining science and art in an infinite laboratory." It can run on Windows, Linux and Sugar. Due to its block scripting system, Physical Etoys allows different electronic devices such as Lego NXT, Arduino boards, Sphero, Kinect, and Wiimote joysticks to interact with one another. Hackety Hack is a free Ruby-based environment that aims to make learning programming easy for beginners, especially teenagers. Karel, Karel++, and Karel J. Robot are languages aimed at beginners, used to control a simple robot in a city consisting of a rectangular grid of streets. While Karel is its own language, Karel++ is a version of Karel implemented in C++, while Karel J. Robot is a version of Karel implemented in Java. Kodu is a language that is simple and entirely icon-based. It was developed by Microsoft Research as a project to encourage younger children, especially girls, to enjoy technology. Programs are composed of pages, which are divided into rules, which are further divided into conditions and actions. Conditions are evaluated simultaneously. The Kodu language is designed specifically for game development and provides specialized primitives derived from gaming scenarios. Programs are expressed in physical terms, using concepts like vision, hearing, and time to control characters' behavior.
The Kodu tool is available in three forms: PC as a free download in public beta and academic forms, and as a low-cost Xbox 360 Live download. Logo is an educational language for children designed in 1967 by Daniel G. Bobrow, Wally Feurzeig, Seymour Papert and Cynthia Solomon. Today, the language is remembered mainly for its use of "turtle graphics," in which commands for movement and drawing produce line graphics using a small robot called a "turtle." The language was originally conceived to teach concepts of programming related to Lisp and only later to enable what Papert called "body-syntonic reasoning", where students could understand (and predict and reason about) the turtle's motion by imagining what they would do if they were the turtle. Lego Mindstorms is a line of Lego sets combining programmable bricks with electric motors, sensors, Lego bricks, and Lego Technic pieces (such as gears, axles, and beams). Mindstorms originated from the programmable sensor blocks used in the line of educational toys. The first retail version of Lego Mindstorms was released in 1998 and marketed commercially as the Robotics Invention System (RIS). The current version was released in 2006 as Lego Mindstorms NXT. A wide range of programming languages is used for the Mindstorms, from Logo to BASIC to derivatives of Java, Smalltalk and C. The Lego Mindstorms approach to programming now has dedicated physical sites called Computer Clubhouses. Mama is an educational object-oriented language designed to help young students start programming by providing all the language elements in the student's language. Mama is available in several languages, with both LTR and RTL language-direction support. A new variant of Mama was built atop Carnegie Mellon's Alice development environment, supporting scripting of the 3D stage objects. This variant was designed to help young students start programming by building 3D animations and games.
A document on educational programming principles explains Mama's design considerations.

RoboMind is a simple educational programming environment that allows beginners to program a robot. It introduces popular programming techniques along with robotics and artificial intelligence. The robot can be programmed in Arabic, Chinese, Dutch, German, English and Swedish.

Scratch is a visual language with the goal of teaching programming concepts to children by allowing them to create projects such as games, videos, and music. It does this by simplifying code into function "blocks" that can be dragged and connected, then run by clicking the green flag icon. In Scratch, interactive objects, graphics, and sounds can be easily imported into a new program and combined, giving quick results. The Scratch community has developed and uploaded over 1,000,000,000 projects, with over 164,000,000 publicly shared. It is developed by the Lifelong Kindergarten group at MIT Media Lab.

ScratchJr is a derivative of the Scratch graphical language, designed for children aged around 5–7.

Snap! is a free open-source blocks-based graphical language implemented in JavaScript and originally derived from MIT's Scratch. Snap! adds the ability to create new blocks and has first-class functions, enabling the use of anonymous functions. It is actively maintained by UC Berkeley, and the source is hosted entirely on GitHub.

Stagecast Creator is a visual programming system based on programming by demonstration. Users demonstrate to the system what to do by moving icons on the screen, and it generates rules for the objects (characters). Users can create two-dimensional simulations that model concepts, multi-level games, and interactive stories.

Stencyl is a visual programming and game development IDE that has been used for education and commerce. The concept of code blocks it implements is based on MIT's Scratch visual language (listed above).
It also permits the use of normal typed code (separate or intermingled) through its own API and the Haxe language.

ToonTalk is a language and environment that looks like a video game. Computational abstractions are mapped to concrete analogs such as robots, houses, trucks, birds, nests, and boxes. It supports big integers and exact rational numbers. It is based upon concurrent constraint programming.

== University ==

Curry is a teaching language designed to amalgamate the most important declarative programming paradigms, namely functional programming (nested expressions, higher-order functions, lazy evaluation) and logic programming (logical variables, partial data structures, built-in search). It also integrates the two most important operational principles developed in the area of integrated functional logic languages: "residuation" and "narrowing."

Flowgorithm is a graphical authoring tool for writing and executing programs via flowcharts. The approach is designed to emphasize the algorithm rather than the syntax of a given language. The flowchart can be converted to several major languages such as C#, Java, Visual Basic .NET and Python.

Oz is a language designed to teach computer theory. It supports most major paradigms in one language so that students can learn paradigms without having to learn multiple syntaxes. Oz contains most of the concepts of the major programming paradigms, including logic, functional (both lazy and eager), imperative, object-oriented, constraint, distributed, and concurrent programming. It has a canonical textbook, Concepts, Techniques, and Models of Computer Programming (2004), and a freely available standard implementation, the Mozart Programming System.

== See also ==
Category: Programming language comparisons
Assembly language – a low-level programming language
Wiki Markup Language
Sugar – a GUI designed for constructive learning
Design by numbers
Processing – a language dedicated to artwork

== References ==
https://en.wikipedia.org/wiki/List_of_educational_programming_languages
Programming style, also known as coding style, refers to the conventions and patterns used in writing source code, resulting in a consistent and readable codebase. These conventions often encompass aspects such as indentation, naming conventions, capitalization, and comments. Consistent programming style is generally considered beneficial for code readability and maintainability, particularly in collaborative environments.

Maintaining a consistent style across a codebase can improve readability and ease of software maintenance. It allows developers to quickly understand code written by others and reduces the likelihood of errors during modifications. Adhering to standardized coding guidelines ensures that teams follow a uniform approach, making the codebase easier to manage and scale. Many organizations and open-source projects adopt specific coding standards to facilitate collaboration and reduce cognitive load.

Style guidelines can be formalized in documents known as coding conventions, which dictate specific formatting and naming rules. These conventions may be prescribed by official standards for a programming language or developed internally within a team or project. For example, Python's PEP 8 is a widely recognized style guide that outlines best practices for writing Python code. In contrast, languages like C or Java may have industry standards that are either formally documented or adhered to by convention.

== Automation ==

Adherence to coding style can be enforced through automated tools, which format code according to predefined guidelines. These tools reduce the manual effort required to maintain style consistency, allowing programmers to focus on logic and functionality. For instance, tools such as Black for Python and clang-format for C++ automatically reformat code to comply with specified coding standards.
== Style guidelines ==

Common elements of coding style include:

Indentation and whitespace character use – Ensures consistent block structures and improves readability.
Naming conventions – Standardizes how variables, functions, and classes are named, typically adhering to camelCase, snake_case, or PascalCase, depending on the language.
Capitalization – Dictates whether keywords and identifiers are capitalized or lowercase, in line with language syntax.
Comment use – Provides context and explanations within code without affecting its execution.

=== Indentation ===

Indentation style can assist a reader in various ways, including identifying control flow and blocks of code. In some programming languages, indentation is used to delimit blocks of code, and is therefore not a matter of style. In languages that ignore whitespace, indentation can affect readability. For example, formatted in a commonly-used style: Arguably, poorly formatted:

==== Notable indenting styles ====

===== ModuLiq =====

The ModuLiq Zero Indentation Style groups by empty line rather than indenting. Example:

===== Lua =====

Lua does not use the traditional curly braces or parentheses; rather, the expression in a conditional statement must be followed by then, and the block must be closed with end. Indenting is optional in Lua. and, or, and not function as logical operators.

===== Python =====

Python relies on the off-side rule, using indenting to indicate and implement control structure, thus eliminating the need for bracketing (i.e., { and }). However, copying and pasting indented code can cause problems, because the indent level of the pasted code may not be the same as the indent level of the target line. Such reformatting by hand is tedious and error-prone, but some text editors and integrated development environments (IDEs) have features to do it automatically.
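As a minimal sketch of the off-side rule (the classify function is made up for this illustration), indentation alone determines which statements belong to each block:

```python
def classify(n):
    # The indented lines under "if"/"else" form their blocks;
    # no braces or keywords are needed to close them.
    if n % 2 == 0:
        kind = "even"
    else:
        kind = "odd"
    return kind  # dedenting back to function level ends both branches

print(classify(4))  # even
print(classify(7))  # odd
```

Because the indentation is the block structure, re-indenting a pasted line can silently move it into or out of a block.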
There are also problems when indented code is rendered unusable when posted on a forum or web page that removes whitespace, though this problem can be avoided where it is possible to enclose code in whitespace-preserving tags such as "<pre> ... </pre>" (for HTML), "[code]" ... "[/code]" (for bbcode), etc. Python starts a block with a colon (:). Python programmers tend to follow a commonly agreed style guide known as PEP 8, and there are tools designed to automate PEP 8 compliance.

===== Haskell =====

Haskell, like Python, has the off-side rule. It has a two-dimensional syntax where indenting is meaningful to define blocks (although an alternate syntax uses curly braces and semicolons). Haskell is a declarative language: there are no statements, only declarations within a Haskell script. Example: may be written in one line as: Haskell encourages the use of literate programming, where extended text explains the genesis of the code. In literate Haskell scripts (named with the .lhs extension), everything is a comment except blocks marked as code. The program can be written in LaTeX, in which case the code environment marks what is code. Alternatively, each active code paragraph can be marked by preceding and ending it with an empty line, and starting each line of code with a greater-than sign and a space. Here is an example using LaTeX markup: And an example using plain text:

=== Vertical alignment ===

Some programmers consider it valuable to align similar elements vertically (as in a table, in columns), citing that it can make typo-generated bugs more obvious. For example, unaligned: aligned: Unlike the unaligned code, the aligned code implies that the search and replace values are related, since they have corresponding elements. As there is one more value for search than for replacement, if this is a bug, it is more likely to be spotted via visual inspection.

Cited disadvantages of vertical alignment include: Dependencies across lines, which lead to maintenance load.
For example, if a long column value is added that requires a wider column, then all lines of the table must be modified to maintain the tabular form; this is a larger change, which requires more effort to review and to understand at a later date.
Brittleness: if a programmer does not correctly format the table when making a change, the result is a visual mess that is harder to read than unaligned code. Simple refactoring operations, such as renaming, can break the formatting.
More effort to maintain, which may discourage a programmer from making a beneficial change, such as improving the name of an identifier, because doing so would require significant formatting effort.
Requirement to use a fixed-width font rather than proportional fonts.

Maintaining alignment can be alleviated by a tool that provides support (e.g., elastic tabstops), although that creates a reliance on such tools. As an example, simple refactoring operations to rename "$replacement" to "$r" and "$anothervalue" to "$a" result in: With unaligned formatting, these changes do not have such a dramatic, inconsistent or undesirable effect:

=== Whitespace ===

A free-format language ignores whitespace characters (spaces, tabs and new lines), so the programmer is free to style the code in different ways without affecting the meaning of the code. Generally, the programmer uses a style that is considered to enhance readability. The two code snippets below are the same logically, but differ in whitespace. versus The use of tabs for whitespace is debatable. Alignment issues arise due to differing tab stops in different environments and mixed use of tabs and spaces. As an example, one programmer prefers tab stops of four and has their toolset configured this way, and uses these to format their code. Another programmer prefers tab stops of eight, and their toolset is configured this way. When someone else examines the original person's code, they may well find it difficult to read.
One widely used solution to this issue is to forbid the use of tabs for alignment, or to set rules on how tab stops must be configured. Note that tabs work fine provided they are used consistently, restricted to logical indentation, and not used for alignment:

== See also ==
MISRA C – Software development standard for the C programming language
Naming convention (programming) – Set of rules for naming entities in source code and documentation

== References ==
https://en.wikipedia.org/wiki/Programming_style
The term programming domain is mostly used when referring to domain-specific programming languages. It refers to a set of programming languages or programming environments that were written specifically for a particular domain, where a domain is a broad subject area for end users, such as accounting or finance, or a category of program usage, such as artificial intelligence or email. Languages and systems within a single programming domain have functions common to the domain and may omit functions that are irrelevant to it.

Some examples of programming domains are:

Expert systems – computer systems that emulate the decision-making ability of a human expert and are designed to solve complex problems by reasoning through bodies of knowledge.
Natural-language processing – handling interactions between computers and human (natural) languages, such as speech recognition, natural-language understanding, and natural-language generation.
Computer vision – dealing with how computers can understand and automate tasks that the human visual system can do, extracting data from the real world.

Other programming domains include:

Application scripting
Array programming
Artificial-intelligence reasoning
Cloud computing
Computational statistics
Contact management software
E-commerce
Financial time-series analysis
General-purpose applications
Image processing
Internet
Numerical mathematics
Programming education
Relational database querying
Software prototyping
Symbolic mathematics
Systems design and implementation
Text processing
Theorem proving
Video game programming and development
Video processing

== See also ==
Domain (software engineering)
Domain-specific language

== References ==
Akour, Mohammed & Falah, Bouchaib (2016). "Application domain and programming language readability yardsticks". pp. 1–6. doi:10.1109/CSIT.2016.7549476.
https://en.wikipedia.org/wiki/Programming_domain
In computer programming, a trait is a language concept that represents a set of methods that can be used to extend the functionality of a class.

== Rationale ==

In object-oriented programming, behavior is sometimes shared between classes which are not related to each other. For example, many unrelated classes may have methods to serialize objects to JSON. Historically, there have been several approaches to solve this without duplicating the code in every class needing the behavior. Approaches include multiple inheritance and mixins, but these have drawbacks: the behavior of the code may unexpectedly change if the order in which the mixins are applied is altered, or if new methods are added to the parent classes or mixins.

Traits solve these problems by allowing classes to use the trait and get the desired behavior. If a class uses more than one trait, the order in which the traits are used does not matter. The methods provided by the traits have direct access to the data of the class.

== Characteristics ==

Traits combine aspects of protocols (interfaces) and mixins. Like an interface, a trait defines one or more method signatures, for which implementing classes must provide implementations. Like a mixin, a trait provides additional behavior for the implementing class.

In case of a naming collision between methods provided by different traits, the programmer must explicitly disambiguate which one of those methods will be used in the class, thus manually solving the diamond problem of multiple inheritance. This is different from other composition methods in object-oriented programming, where conflicting names are automatically resolved by scoping rules.
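Python has no built-in traits (library-based approaches exist), but the JSON-serialization example above can be sketched with a trait-like mixin: the mixin provides to_json() and requires the using class to supply to_dict(), its "required method". The class names here are hypothetical, chosen for illustration:

```python
import json

class JsonSerializable:
    """A trait-like mixin: provides to_json(), and requires the
    using class to supply to_dict() (the trait's required method)."""
    def to_json(self):
        return json.dumps(self.to_dict())

class User(JsonSerializable):
    def __init__(self, name):
        self.name = name

    def to_dict(self):  # fulfils the mixin's requirement
        return {"name": self.name}

print(User("Ada").to_json())  # {"name": "Ada"}
```

Unlike real traits, Python resolves name collisions between mixins silently by method resolution order rather than forcing the programmer to disambiguate.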
Operations which can be performed with traits include:

symmetric sum: an operation that merges two disjoint traits to create a new trait
override (or asymmetric sum): an operation that forms a new trait by adding methods to an existing trait, possibly overriding some of its methods
alias: an operation that creates a new trait by adding a new name for an existing method
exclusion: an operation that forms a new trait by removing a method from an existing trait. (Combining this with the alias operation yields a shallow rename operation.)

If a method is excluded from a trait, that method must be provided by the class that consumes the trait, or by a parent class of that class, because the methods provided by the trait might call the excluded method.

Trait composition is commutative (i.e., given traits A and B, A + B is equivalent to B + A) and associative (i.e., given traits A, B, and C, (A + B) + C is equivalent to A + (B + C)).

== Limitations ==

While traits offer significant advantages over many alternatives, they do have their own limitations.

=== Required methods ===

If a trait requires the consuming class to provide certain methods, the trait cannot know whether those methods are semantically equivalent to the trait's needs. For some dynamic languages, such as Perl, the required method can only be identified by a method name, not a full method signature, making it harder to guarantee that the required method is appropriate.

=== Excluding methods ===

If a method is excluded from a trait, that method becomes a 'required' method for the trait, because the trait's other methods might call it.

== Supported languages ==

Traits come originally from the programming language Self and are supported by the following programming languages:

AmbientTalk: Combines the properties of Self traits (object-based multiple inheritance) and Smalltalk's Squeak traits (requiring explicit composition of traits by the programmer).
It builds on the research on stateful and freezable traits to enable state within traits, which was not allowed in the first definitions.
C#: Since version 8.0, C# has support for default interface methods, which have some properties of traits.
C++: Used in the Standard Template Library and the C++ Standard Library to support generic container classes, and in the Boost TypeTraits library.
Curl: Abstract classes as mixins permit method implementations and thus constitute traits by another name.
Fortress
Groovy: Since version 2.3.
Haskell: In Haskell, traits are known as type classes.
Haxe: Since version 2.4.0. Called Static Extension in the manual, it uses the using keyword.
Java: Since version 8, Java has support for default methods, which have some properties of traits.
JavaScript: Traits can be implemented via functions and delegations or through libraries that provide traits.
Julia: Several packages implement traits, e.g.,
Kotlin: Traits have been called interfaces since M12.
Lasso
Mojo: Since version 0.6.0.
OCaml: Traits can be implemented using a variety of language features: module and module type inclusion, functors and functor types, class and class type inheritance, et cetera.
Perl: Called roles, they are implemented in Perl libraries such as Moose, Role::Tiny and Role::Basic. Roles are part of the sister language Raku. With the acceptance of the Corinna OOP proposal, Perl will have roles native to the language as part of a modern OOP system.
PHP: Since version 5.4, PHP allows users to specify templates that provide the ability to "inherit" from more than one (trait-)class, as a pseudo multiple inheritance.
Python: Via a third-party library, or via higher-order mixin classes.
Racket: Supports traits as a library and uses macros, structures, and first-class classes to implement them.
Ruby: Module mixins can be used to implement traits.
Rust
Scala: Traits are supported natively with the trait keyword.
Smalltalk: Traits are implemented in two dialects of Smalltalk, Squeak and Pharo.
Swift: Traits can be implemented with protocol extensions.

== Examples ==

=== C# ===

In C# 8.0, it is possible to define an implementation as a member of an interface.

=== PHP ===

This example uses a trait to enhance other classes: This allows simulating aspects of multiple inheritance:

=== Rust ===

A trait in Rust declares a set of methods that a type must implement. The Rust compiler requires traits to be made explicit, which ensures the safety of generics in Rust. To simplify the tedious and repeated implementation of traits like Debug and Ord, the derive macro can be used to request that the compiler generate certain implementations automatically. Derivable traits include Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord and Hash.

== See also ==
Extension method
Interface (object-oriented programming)
Parametric polymorphism
UFCS

== References ==

== External links ==
"Traits: Composable Units of Behavior". Software Composition Group. University of Bern.
https://en.wikipedia.org/wiki/Trait_(computer_programming)
SIGNAL is a programming language based on synchronized dataflow (flows + synchronization): a process is a set of equations on elementary flows describing both data and control. The SIGNAL formal model provides the capability to describe systems with several clocks (polychronous systems) as relational specifications. Relations are useful as partial specifications and as specifications of non-deterministic devices (for instance, a non-deterministic bus) or external processes (for instance, an unsafe car driver).

Using SIGNAL, one can specify an application, design an architecture, and refine detailed components down to RTOS or hardware description. The SIGNAL model supports a design methodology which goes from specification to implementation, from abstraction to concretization, and from synchrony to asynchrony. SIGNAL has been mainly developed in the INRIA Espresso team since the 1980s, at the same time as the similar programming languages Esterel and Lustre.

== A brief history ==

The SIGNAL language was first designed for signal processing applications in the beginning of the 1980s. It was proposed to answer the demand for a new domain-specific language for the design of signal processing applications, adopting a dataflow and block-diagram style with array and sliding-window operators. P. Le Guernic, A. Benveniste, and T. Gautier were in charge of the language definition. The first paper on SIGNAL was published in 1982, while the first complete description of SIGNAL appeared in the PhD thesis of T. Gautier. The symbolic representation of SIGNAL via z/3z (over [-1,0,1]) was introduced in 1986. A full compiler of SIGNAL, based on the clock calculus on a hierarchy of Boolean clocks, was described by L. Besnard in his PhD thesis in 1992. The clock calculus was later improved by T. Amagbegnon with the proposition of arborescent canonical forms.
During the 1990s, the application domain of the SIGNAL language was extended to general embedded and real-time systems. The relation-oriented specification style enabled the incremental construction of systems, and also led to designs that consider multi-clocked systems, in contrast to the original single-clock-based implementations of Esterel and Lustre. Moreover, the design and implementation of distributed embedded systems were also taken into account in SIGNAL. The corresponding research includes the optimization methods proposed by B. Chéron, the clustering models defined by B. Le Goff, the abstraction and separate compilation formalized by O. Maffeïs, and the implementation of distributed programs developed by P. Aubry.

== The Polychrony toolset ==

The Polychrony toolset is an open-source development environment for critical/embedded systems based on SIGNAL, a real-time polychronous dataflow language. It provides a unified model-driven environment to perform design exploration by using top-down and bottom-up design methodologies formally supported by design model transformations, from specification to implementation and from synchrony to asynchrony. It can be included in heterogeneous design systems with various input formalisms and output languages.

Polychrony is a set of tools composed of:

A SIGNAL batch compiler
A graphical user interface (editor + interactive access to compiling functionalities)
The Sigali tool, an associated formal system for formal verification and controller synthesis. Sigali is developed together with the INRIA Vertecs project.

== The SME environment ==

The SME (SIGNAL Meta under Eclipse) environment is a front-end of Polychrony in the Eclipse environment based on model-driven engineering (MDE) technologies. It consists of a set of Eclipse plug-ins which rely on the Eclipse Modeling Framework (EMF). The environment is built around SME, a metamodel of the SIGNAL language extended with mode automata concepts.
The SME environment is composed of several plug-ins which correspond to:

A reflexive editor: a tree view allowing the user to manipulate models conforming to the SME metamodel.
A graphical modeler based on the TopCased modeling facilities.
A reflexive editor and an Eclipse view to create compilation scenarios.
A direct connection to the Polychrony services (compilation, formal verification, etc.).
Documentation and model examples.

== See also ==
Synchronous programming language
Dataflow programming
Globally asynchronous locally synchronous
Formal verification
Model checking
Formal semantics of programming languages
AADL
Simulink
Avionics
System design
Asynchrony (computer programming)

== Notes and references ==

== External links ==
The INRIA/IRISA Espresso team
The Polychrony toolset dedicated to SIGNAL (official website of Polychrony) backup link
Synchrone Lab (the synchronous language Lustre)
Esterel (the synchronous language Esterel)
https://en.wikipedia.org/wiki/SIGNAL_(programming_language)
In computer programming, a macro (short for "macro instruction"; from Greek μακρο- 'long, large') is a rule or pattern that specifies how a certain input should be mapped to a replacement output. Applying a macro to an input is known as macro expansion. The input and output may be a sequence of lexical tokens or characters, or a syntax tree. Character macros are supported in software applications to make it easy to invoke common command sequences. Token and tree macros are supported in some programming languages to enable code reuse or to extend the language, sometimes for domain-specific languages.

Macros are used to make a sequence of computing instructions available to the programmer as a single program statement, making the programming task less tedious and less error-prone. Thus, they are called "macros" because a "big" block of code can be expanded from a "small" sequence of characters. Macros often allow positional or keyword parameters that dictate what the conditional assembler program generates, and have been used to create entire programs or program suites according to such variables as operating system, platform or other factors. The term derives from "macro instruction", and such expansions were originally used in generating assembly language code.

== Keyboard and mouse macros ==

Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to transform into other, usually more time-consuming, sequences of keystrokes and mouse actions. In this way, frequently used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders. During the 1980s, macro programs – originally SmartKey, then SuperKey, KeyWorks, Prokey – were very popular, first as a means to automatically format screenplays, then for a variety of user-input tasks.
These programs were based on the terminate-and-stay-resident mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications, such as word processors and spreadsheets, making it possible to create application-sensitive keyboard macros.

Keyboard macros can be used in massively multiplayer online role-playing games (MMORPGs) to perform repetitive but lucrative tasks, thus accumulating resources. As this is done without human effort, it can skew the economy of the game. For this reason, use of macros is a violation of the TOS or EULA of most MMORPGs, and their administrators spend considerable effort to suppress them.

=== Application macros and scripting ===

Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros. They are created by carrying out the sequence once and letting the application record the actions. An underlying macro programming language, most commonly a scripting language, with direct access to the features of the application may also exist.

The programmers' text editor Emacs (short for "editing macros") follows this idea to its conclusion. In effect, most of the editor is made of macros. Emacs was originally devised as a set of macros in the editing language TECO; it was later ported to dialects of Lisp. Another programmers' text editor, Vim (a descendant of vi), also has an implementation of keyboard macros. It can record into a register (macro) what a person types on the keyboard, and that recording can be replayed or edited just like VBA macros for Microsoft Office. Vim also has a scripting language called Vimscript to create macros.
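The record-and-replay idea behind keyboard and application macros can be sketched in a few lines of Python; the MacroRecorder class below is a made-up illustration, not any real editor's API:

```python
class MacroRecorder:
    """Records editing actions as they are performed so the same
    sequence can be replayed later, like a keyboard macro."""
    def __init__(self):
        self.actions = []

    def record(self, func, *args):
        self.actions.append((func, args))  # remember the action
        return func(*args)                 # and perform it now

    def replay(self):
        # Re-perform every recorded action, in order.
        for func, args in self.actions:
            func(*args)

buffer = []
rec = MacroRecorder()
rec.record(buffer.append, "hello")
rec.record(buffer.append, "world")
rec.replay()   # repeats the two insertions
print(buffer)  # ['hello', 'world', 'hello', 'world']
```

A real macro recorder captures keystrokes rather than function calls, but the principle is the same: a stored sequence of actions replayed on demand.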
Visual Basic for Applications (VBA) is a programming language included in Microsoft Office from Office 97 through Office 2019 (although it was available in some components of Office prior to Office 97). However, its function has evolved from and replaced the macro languages that were originally included in some of these applications.

XEDIT, running on the Conversational Monitor System (CMS) component of VM, supports macros written in EXEC, EXEC2 and REXX, and some CMS commands were actually wrappers around XEDIT macros. The Hessling Editor (THE), a partial clone of XEDIT, supports Rexx macros using Regina and Open Object Rexx (ooRexx). Many common applications, and some on PCs, use Rexx as a scripting language.

==== Macro virus ====

VBA has access to most Microsoft Windows system calls and executes when documents are opened. This makes it relatively easy to write computer viruses in VBA, commonly known as macro viruses. In the mid-to-late 1990s, this became one of the most common types of computer virus. However, during the late 1990s and to date, Microsoft has been patching and updating its programs. In addition, current anti-virus programs immediately counteract such attacks.

== Parameterized and parameterless macros ==

A parameterized macro is a macro that is able to insert given objects into its expansion. This gives the macro some of the power of a function. As a simple example, in the C programming language, this is a typical macro that is not a parameterized macro, i.e., a parameterless macro:

#define PI 3.14159

This causes PI to always be replaced with 3.14159 wherever it occurs. An example of a parameterized macro, on the other hand, is this:

#define pred(x) ((x)-1)

What this macro expands to depends on what argument x is passed to it.
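The expansion is purely textual. As a sketch (the expand helper below is hypothetical, not how a real preprocessor is implemented), the substitution can be simulated in Python, which also shows why the macro body is wrapped in parentheses:

```python
import re

def expand(param, body, arg):
    """Expand a one-parameter C-style macro by pure textual
    substitution, the way a simple preprocessor would."""
    return re.sub(rf"\b{re.escape(param)}\b", arg, body)

# The macro above: #define pred(x) ((x)-1)
print(expand("x", "((x)-1)", "y+2"))   # ((y+2)-1)

# Why the defensive parentheses matter: with a naive body the
# substituted text parses differently than intended.
naive = expand("x", "x*2", "1+2")      # textually: 1+2*2
safe  = expand("x", "((x)*2)", "1+2")  # textually: ((1+2)*2)
print(eval(naive), eval(safe))         # 5 6
```

Without the parentheses, operator precedence in the surrounding expression changes the meaning of the expanded text, one of the textual-substitution pitfalls discussed below.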
Here are some possible expansions:

pred(2) → ((2)-1)
pred(y+2) → ((y+2)-1)
pred(f(5)) → ((f(5))-1)

Parameterized macros are a useful source-level mechanism for performing in-line expansion, but in languages such as C, where they use simple textual substitution, they have a number of severe disadvantages compared to other mechanisms for performing in-line expansion, such as inline functions. The parameterized macros used in languages such as Lisp, PL/I and Scheme, on the other hand, are much more powerful, able to make decisions about what code to produce based on their arguments; thus, they can effectively be used to perform run-time code generation.

== Text-substitution macros ==

Languages such as C and some assembly languages have rudimentary macro systems, implemented as preprocessors to the compiler or assembler. C preprocessor macros work by simple textual substitution at the token, rather than the character, level. However, the macro facilities of more sophisticated assemblers, e.g., IBM High Level Assembler (HLASM), can't be implemented with a preprocessor; the code for assembling instructions and data is interspersed with the code for assembling macro invocations.

A classic use of macros is in the computer typesetting system TeX and its derivatives, where most functionality is based on macros. MacroML is an experimental system that seeks to reconcile static typing and macro systems. Nemerle has typed syntax macros, and one productive way to think of these syntax macros is as a multi-stage computation.

Other examples:

m4 is a sophisticated stand-alone macro processor.
TRAC
Macro Extension TAL, accompanying Template Attribute Language
SMX: for web pages
ML/1 (Macro Language One)
troff and nroff: for typesetting and formatting Unix manpages.
CMS EXEC: for command-line macros and application macros
EXEC 2 in Conversational Monitor System (CMS): for command-line macros and application macros
CLIST in IBM's Time Sharing Option (TSO): for command-line macros and application macros
REXX: for command-line macros and application macros in, e.g., AmigaOS, CMS, OS/2, TSO
SCRIPT: for formatting documents
Various shells for, e.g., Linux
Some major applications have been written as text macros invoked by other applications, e.g., by XEDIT in CMS. === Embeddable languages === Some languages, such as PHP, can be embedded in free-format text, or the source code of other languages. The mechanism by which the code fragments are recognised (for instance, being bracketed by <?php and ?>) is similar to a textual macro language, but they are much more powerful, fully featured languages. == Procedural macros == Macros in the PL/I language are written in a subset of PL/I itself: the compiler executes "preprocessor statements" at compilation time, and the output of this execution forms part of the code that is compiled. The ability to use a familiar procedural language as the macro language gives power much greater than that of text substitution macros, at the expense of a larger and slower compiler. Macros in PL/I, as well as in many assemblers, may have side effects, e.g., setting variables that other macros can access. Frame technology's frame macros have their own command syntax but can also contain text in any language. Each frame is both a generic component in a hierarchy of nested subassemblies, and a procedure for integrating itself with its subassembly frames (a recursive process that resolves integration conflicts in favor of higher level subassemblies). The outputs are custom documents, typically compilable source modules. Frame technology can avoid the proliferation of similar but subtly different components, an issue that has plagued software development since the invention of macros and subroutines.
Most assembly languages have less powerful procedural macro facilities, for example allowing a block of code to be repeated N times for loop unrolling; but these have a completely different syntax from the actual assembly language. == Syntactic macros == Macro systems—such as the C preprocessor described earlier—that work at the level of lexical tokens cannot preserve the lexical structure reliably. Syntactic macro systems work instead at the level of abstract syntax trees, and preserve the lexical structure of the original program. The most widely used implementations of syntactic macro systems are found in Lisp-like languages. These languages are especially suited for this style of macro due to their uniform, parenthesized syntax (known as S-expressions). In particular, uniform syntax makes it easier to determine the invocations of macros. Lisp macros transform the program structure itself, with the full language available to express such transformations. While syntactic macros are often found in Lisp-like languages, they are also available in other languages such as Prolog, Erlang, Dylan, Scala, Nemerle, Rust, Elixir, Nim, Haxe, and Julia. They are also available as third-party extensions to JavaScript and C#. === Early Lisp macros === Before Lisp had macros, it had so-called FEXPRs, function-like operators whose inputs were not the values computed by the arguments but rather the syntactic forms of the arguments, and whose outputs were values to be used in the computation. In other words, FEXPRs were implemented at the same level as EVAL, and provided a window into the meta-evaluation layer. This was generally found to be a difficult model to reason about effectively. In 1963, Timothy Hart proposed adding macros to Lisp 1.5 in AI Memo 57: MACRO Definitions for LISP.
=== Anaphoric macros === An anaphoric macro is a type of programming macro that deliberately captures some form supplied to the macro which may be referred to by an anaphor (an expression referring to another). Anaphoric macros first appeared in Paul Graham's On Lisp and their name is a reference to linguistic anaphora—the use of words as a substitute for preceding words. === Hygienic macros === In the mid-eighties, a number of papers introduced the notion of hygienic macro expansion (syntax-rules), a pattern-based system where the syntactic environments of the macro definition and the macro use are distinct, allowing macro definers and users not to worry about inadvertent variable capture (cf. referential transparency). Hygienic macros have been standardized for Scheme in the R5RS, R6RS, and R7RS standards. A number of competing implementations of hygienic macros exist such as syntax-rules, syntax-case, explicit renaming, and syntactic closures. Both syntax-rules and syntax-case have been standardized in the Scheme standards. Recently, Racket has combined the notions of hygienic macros with a "tower of evaluators", so that the syntactic expansion time of one macro system is the ordinary runtime of another block of code, and showed how to apply interleaved expansion and parsing in a non-parenthesized language. A number of languages other than Scheme either implement hygienic macros or implement partially hygienic systems. Examples include Scala, Rust, Elixir, Julia, Dylan, Nim, and Nemerle. === Applications ===
Evaluation order
Macro systems have a range of uses. Being able to choose the order of evaluation (see lazy evaluation and non-strict functions) enables the creation of new syntactic constructs (e.g. control structures) indistinguishable from those built into the language. For instance, in a Lisp dialect that has cond but lacks if, it is possible to define the latter in terms of the former using macros.
For example, Scheme has both continuations and hygienic macros, which enables a programmer to design their own control abstractions, such as looping and early exit constructs, without the need to build them into the language.
Data sub-languages and domain-specific languages
Next, macros make it possible to define data languages that are immediately compiled into code, which means that constructs such as state machines can be implemented in a way that is both natural and efficient.
Binding constructs
Macros can also be used to introduce new binding constructs. The most well-known example is the transformation of let into the application of a function to a set of arguments. Felleisen conjectures that these three categories make up the primary legitimate uses of macros in such a system. Others have proposed alternative uses of macros, such as anaphoric macros in macro systems that are unhygienic or allow selective unhygienic transformation. The interaction of macros and other language features has been a productive area of research. For example, components and modules are useful for large-scale programming, but the interaction of macros and these other constructs must be defined for their use together. Module and component-systems that can interact with macros have been proposed for Scheme and other languages with macros. For example, the Racket language extends the notion of a macro system to a syntactic tower, where macros can be written in languages including macros, using hygiene to ensure that syntactic layers are distinct and allowing modules to export macros to other modules. == Macros for machine-independent software == Macros are normally used to map a short string (macro invocation) to a longer sequence of instructions. Another, less common, use of macros is to do the reverse: to map a sequence of instructions to a macro string.
This was the approach taken by the STAGE2 Mobile Programming System, which used a rudimentary macro compiler (called SIMCMP) to map the specific instruction set of a given computer into machine-independent macros. Applications (notably compilers) written in these machine-independent macros can then be run without change on any computer equipped with the rudimentary macro compiler. The first application run in such a context is a more sophisticated and powerful macro compiler, written in the machine-independent macro language. This macro compiler is applied to itself, in a bootstrap fashion, to produce a compiled and much more efficient version of itself. The advantage of this approach is that complex applications can be ported from one computer to a very different computer with very little effort (for each target machine architecture, just the writing of the rudimentary macro compiler). The advent of modern programming languages, notably C, for which compilers are available on virtually all computers, has rendered such an approach superfluous. This was, however, one of the first instances (if not the first) of compiler bootstrapping. == Assembly language == While macro instructions can be defined by a programmer for any set of native assembler program instructions, typically macros are associated with macro libraries delivered with the operating system allowing access to operating system functions such as:
peripheral access by access methods (including macros such as OPEN, CLOSE, READ and WRITE)
operating system functions such as ATTACH, WAIT and POST for subtask creation and synchronization.
Typically such macros expand into executable code, e.g., for the EXIT macroinstruction, a list of define constant instructions, e.g., for the DCB macro—DTF (Define The File) for DOS—or a combination of code and constants, with the details of the expansion depending on the parameters of the macro instruction (such as a reference to a file and a data area for a READ instruction); the executable code often terminated in either a branch and link register instruction to call a routine, or a supervisor call instruction to call an operating system function directly. Macros are also used for generating a Stage 2 job stream for system generation in, e.g., OS/360. Unlike typical macros, sysgen stage 1 macros do not generate data or code to be loaded into storage, but rather use the PUNCH statement to output JCL and associated data. In older operating systems such as those used on IBM mainframes, full operating system functionality was only available to assembler language programs, not to high level language programs (unless assembly language subroutines were used, of course), as the standard macro instructions did not always have counterparts in routines available to high-level languages. == History == In the mid-1950s, when assembly language programming was the main way to program a computer, macro instruction features were developed to reduce source code (by generating multiple assembly statements from each macro instruction) and to enforce coding conventions (e.g. specifying input/output commands in standard ways). A macro instruction embedded in the otherwise assembly source code would be processed by a macro compiler, a preprocessor to the assembler, to replace the macro with one or more assembly instructions. The resulting code, pure assembly, would be translated to machine code by the assembler. Two of the earliest programming installations to develop macro languages for the IBM 705 computer were at Dow Chemical Corp.
in Delaware and the Air Material Command, Ballistics Missile Logistics Office in California. Some consider macro instructions as an intermediate step between assembly language programming and the high-level programming languages that followed, such as FORTRAN and COBOL. By the late 1950s the macro language was followed by the Macro Assemblers. These combined the two functions, macro pre-processor and assembler, in a single program. Early examples are FORTRAN Assembly Program (FAP) and Macro Assembly Program (IBMAP) on the IBM 709, 7094, 7040 and 7044, and Autocoder on the 7070/7072/7074. In 1959, Douglas E. Eastwood and Douglas McIlroy of Bell Labs introduced conditional and recursive macros into the popular SAP assembler, creating what is known as Macro SAP. McIlroy's 1960 paper was seminal in the area of extending any (including high-level) programming languages through macro processors. Macro Assemblers allowed assembly language programmers to implement their own macro-language and allowed limited portability of code between two machines running the same CPU but different operating systems, for example, early versions of MS-DOS and CP/M-86. The macro library would need to be written for each target machine but not the overall assembly language program. Note that more powerful macro assemblers allowed use of conditional assembly constructs in macro instructions that could generate different code on different machines or different operating systems, reducing the need for multiple libraries. In the 1980s and early 1990s, desktop PCs were only running at a few MHz and assembly language routines were commonly used to speed up programs written in C, Fortran, Pascal and others. These languages, at the time, used different calling conventions. Macros could be used to interface routines written in assembly language to the front end of applications written in almost any language.
Again, the basic assembly language code remained the same, only the macro libraries needed to be written for each target language. In modern operating systems such as Unix and its derivatives, operating system access is provided through subroutines, usually provided by dynamic libraries. High-level languages such as C offer comprehensive access to operating system functions, obviating the need for assembler language programs for such functionality. Moreover, standard libraries of several newer programming languages, such as Go, actively discourage the use of syscalls in favor of platform-agnostic libraries where syscalls are not necessary, to improve portability and security. == See also ==
Anaphoric macro – type of programming macro
Assembly language § Macros – backstory of macros
Compound operator – Basic programming language construct
Extensible programming – mechanisms for extending the language, compiler and runtime environment
Fused operation – Basic programming language construct
Hygienic macro – Macros whose expansion is guaranteed not to cause the capture of identifiers
Macro and security – Rule for substituting a set input with a set output
Programming by demonstration – Technique for teaching a computer or a robot new behaviors
String interpolation – Replacing placeholders in a string with values
== References ==
== External links ==
How to write Macro Instructions
Rochester Institute of Technology, Professors Powerpoint
https://en.wikipedia.org/wiki/Macro_(computer_science)
In computer science, purely functional programming usually designates a programming paradigm—a style of building the structure and elements of computer programs—that treats all computation as the evaluation of mathematical functions. Program state and mutable objects are usually modeled with temporal logic, as explicit variables that represent the program state at each step of a program execution: a variable state is passed as an input parameter of a state-transforming function, which returns the updated state as part of its return value. This style handles state changes without losing the referential transparency of the program expressions. Purely functional programming consists of ensuring that functions, inside the functional paradigm, will only depend on their arguments, regardless of any global or local state. A pure functional subroutine only has visibility of changes of state represented by state variables included in its scope. == Difference between pure and impure functional programming == The exact difference between pure and impure functional programming is a matter of controversy. Sabry's proposed definition of purity is that all common evaluation strategies (call-by-name, call-by-value, and call-by-need) produce the same result, ignoring strategies that error or diverge. A program is usually said to be functional when it uses some concepts of functional programming, such as first-class functions and higher-order functions. However, a first-class function need not be purely functional, as it may use techniques from the imperative paradigm, such as arrays or input/output methods that use mutable cells, which update their state as side effects. In fact, the earliest programming languages cited as being functional, IPL and Lisp, are both "impure" functional languages by Sabry's definition. 
== Properties of purely functional programming == === Strict versus non-strict evaluation === Every evaluation strategy that terminates on a purely functional program returns the same result. In particular, it ensures that the programmer does not have to consider in which order programs are evaluated, since eager evaluation will return the same result as lazy evaluation. However, it is still possible that an eager evaluation may not terminate while the lazy evaluation of the same program halts. An advantage of this is that lazy evaluation can be implemented much more easily; as all expressions will return the same result at any moment (regardless of program state), their evaluation can be delayed as much as necessary. === Parallel computing === In a purely functional language, the only dependencies between computations are data dependencies, and computations are deterministic. Therefore, to program in parallel, the programmer need only specify the pieces that should be computed in parallel, and the runtime can handle all other details such as distributing tasks to processors, managing synchronization and communication, and collecting garbage in parallel. This style of programming avoids common issues such as race conditions and deadlocks, but has less control than an imperative language. To ensure a speedup, the granularity of tasks must be carefully chosen to be neither too big nor too small. In theory, it is possible to use runtime profiling and compile-time analysis to judge whether introducing parallelism will speed up the program, and thus automatically parallelize purely functional programs. In practice, this has not been terribly successful, and fully automatic parallelization is not practical. === Data structures === Purely functional data structures are persistent. Persistency is required for functional programming; without it, the same computation could return different results.
Functional programming may use persistent non-purely functional data structures, while those data structures may not be used in purely functional programs. Purely functional data structures are often represented in a different way than their imperative counterparts. For example, arrays with constant-time access and update are a basic component of most imperative languages, and many imperative data structures, such as hash tables and binary heaps, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementations, but have logarithmic access and update time. Therefore, purely functional data structures can be used in languages which are non-functional, but they may not be the most efficient tool available, especially if persistency is not required. In general, conversion of an imperative program to a purely functional one also requires ensuring that the formerly-mutable structures are now explicitly returned from functions that update them, a program structure called store-passing style. == Purely functional language == A purely functional language is a language which only admits purely functional programming. Purely functional programs can however be written in languages which are not purely functional. == References ==
https://en.wikipedia.org/wiki/Purely_functional_programming
In computer science, a pointer is an object in many programming languages that stores a memory address. This can be that of another value located in computer memory, or in some cases, that of memory-mapped computer hardware. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page; dereferencing such a pointer would be done by flipping to the page with the given page number and reading the text found on that page. The actual format and content of a pointer variable is dependent on the underlying computer architecture. Using pointers significantly improves performance for repetitive operations, like traversing iterable data structures (e.g. strings, lookup tables, control tables, linked lists, and tree structures). In particular, it is often much cheaper in time and space to copy and dereference pointers than it is to copy and access the data to which the pointers point. Pointers are also used to hold the addresses of entry points for called subroutines in procedural programming and for run-time linking to dynamic link libraries (DLLs). In object-oriented programming, pointers to functions are used for binding methods, often using virtual method tables. A pointer is a simple, more concrete implementation of the more abstract reference data type. Several languages, especially low-level languages, support some type of pointer, although some have more restrictions on their use than others. While "pointer" has been used to refer to references in general, it more properly applies to data structures whose interface explicitly allows the pointer to be manipulated (arithmetically via pointer arithmetic) as a memory address, as opposed to a magic cookie or capability which does not allow such. 
Because pointers allow both protected and unprotected access to memory addresses, there are risks associated with using them, particularly in the latter case. Primitive pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" such a pointer whose value is not a valid memory address could cause a program to crash (or contain invalid data). To alleviate this potential problem, as a matter of type safety, pointers are considered a separate type parameterized by the type of data they point to, even if the underlying representation is an integer. Other measures may also be taken (such as validation and bounds checking), to verify that the pointer variable contains a value that is both a valid memory address and within the numerical range that the processor is capable of addressing. == History == In 1955, Soviet Ukrainian computer scientist Kateryna Yushchenko created the Address programming language that made possible indirect addressing and addresses of the highest rank – analogous to pointers. The language was widely used on Soviet computers. However, it was unknown outside the Soviet Union and usually Harold Lawson is credited with the invention, in 1964, of the pointer. In 2000, Lawson was presented the Computer Pioneer Award by the IEEE "[f]or inventing the pointer variable and introducing this concept into PL/I, thus providing for the first time, the capability to flexibly treat linked lists in a general-purpose high-level language". His seminal paper on the concepts appeared in the June 1967 issue of CACM entitled: PL/I List Processing. According to the Oxford English Dictionary, the word pointer first appeared in print as a stack pointer in a technical memorandum by the System Development Corporation. == Formal description == In computer science, a pointer is a kind of reference.
A data primitive (or just primitive) is any datum that can be read from or written to computer memory using one memory access (for instance, both a byte and a word are primitives). A data aggregate (or just aggregate) is a group of primitives that are logically contiguous in memory and that are viewed collectively as one datum (for instance, an aggregate could be 3 logically contiguous bytes, the values of which represent the 3 coordinates of a point in space). When an aggregate is entirely composed of the same type of primitive, the aggregate may be called an array; in a sense, a multi-byte word primitive is an array of bytes, and some programs use words in this way. In the context of these definitions, a byte is the smallest primitive; each memory address specifies a different byte. The memory address of the initial byte of a datum is considered the memory address (or base memory address) of the entire datum. A memory pointer (or just pointer) is a primitive, the value of which is intended to be used as a memory address; it is said that a pointer points to a memory address. It is also said that a pointer points to a datum [in memory] when the pointer's value is the datum's memory address. More generally, a pointer is a kind of reference, and it is said that a pointer references a datum stored somewhere in memory; to obtain that datum is to dereference the pointer. The feature that separates pointers from other kinds of reference is that a pointer's value is meant to be interpreted as a memory address, which is a rather low-level concept. References serve as a level of indirection: A pointer's value determines which memory address (that is, which datum) is to be used in a calculation. Because indirection is a fundamental aspect of algorithms, pointers are often expressed as a fundamental data type in programming languages; in statically (or strongly) typed programming languages, the type of a pointer determines the type of the datum to which the pointer points. 
== Architectural roots == Pointers are a very thin abstraction on top of the addressing capabilities provided by most modern architectures. In the simplest scheme, an address, or a numeric index, is assigned to each unit of memory in the system, where the unit is typically either a byte or a word – depending on whether the architecture is byte-addressable or word-addressable – effectively transforming all of memory into a very large array. The system would then also provide an operation to retrieve the value stored in the memory unit at a given address (usually utilizing the machine's general-purpose registers). In the usual case, a pointer is large enough to hold more addresses than there are units of memory in the system. This introduces the possibility that a program may attempt to access an address which corresponds to no unit of memory, either because not enough memory is installed (i.e. beyond the range of available memory) or the architecture does not support such addresses. The first case may, in certain platforms such as the Intel x86 architecture, be called a segmentation fault (segfault). The second case is possible in the current implementation of AMD64, where pointers are 64 bits long and addresses only extend to 48 bits. Pointers must conform to certain rules (canonical addresses), so if a non-canonical pointer is dereferenced, the processor raises a general protection fault. On the other hand, some systems have more units of memory than there are addresses. In this case, a more complex scheme such as memory segmentation or paging is employed to use different parts of the memory at different times. The last incarnations of the x86 architecture support up to 36 bits of physical memory addresses, which were mapped to the 32-bit linear address space through the PAE paging mechanism. Thus, only 1/16 of the possible total memory may be accessed at a time.
Another example in the same computer family was the 16-bit protected mode of the 80286 processor, which, though supporting only 16 MB of physical memory, could access up to 1 GB of virtual memory, but the combination of 16-bit address and segment registers made accessing more than 64 KB in one data structure cumbersome. In order to provide a consistent interface, some architectures provide memory-mapped I/O, which allows some addresses to refer to units of memory while others refer to device registers of other devices in the computer. There are analogous concepts such as file offsets, array indices, and remote object references that serve some of the same purposes as addresses for other types of objects. == Uses == Pointers are directly supported without restrictions in languages such as PL/I, C, C++, Pascal, FreeBASIC, and implicitly in most assembly languages. They are used mainly to construct references, which in turn are fundamental to construct nearly all data structures, and to pass data between different parts of a program. In functional programming languages that rely heavily on lists, data references are managed abstractly by using primitive constructs like cons and the corresponding elements car and cdr, which can be thought of as specialised pointers to the first and second components of a cons-cell. This gives rise to some of the idiomatic "flavour" of functional programming. By structuring data in such cons-lists, these languages facilitate recursive means for building and processing data—for example, by recursively accessing the head and tail elements of lists of lists; e.g. "taking the car of the cdr of the cdr". By contrast, memory management based on pointer dereferencing in some approximation of an array of memory addresses facilitates treating variables as slots into which data can be assigned imperatively. 
When dealing with arrays, the critical lookup operation typically involves a stage called address calculation which involves constructing a pointer to the desired data element in the array. In other data structures, such as linked lists, pointers are used as references to explicitly tie one piece of the structure to another. Pointers are used to pass parameters by reference. This is useful if the programmer wants a function's modifications to a parameter to be visible to the function's caller. This is also useful for returning multiple values from a function. Pointers can also be used to allocate and deallocate dynamic variables and arrays in memory. Since a variable will often become redundant after it has served its purpose, it is a waste of memory to keep it, and therefore it is good practice to deallocate it (using the original pointer reference) when it is no longer needed. Failure to do so may result in a memory leak (where available free memory gradually, or in severe cases rapidly, diminishes because of an accumulation of numerous redundant memory blocks). === C pointers === The basic syntax to define a pointer is: int *ptr; This declares ptr as the identifier of an object of the following type: pointer that points to an object of type int This is usually stated more succinctly as "ptr is a pointer to int." Because the C language does not specify an implicit initialization for objects of automatic storage duration, care should often be taken to ensure that the address to which ptr points is valid; this is why it is sometimes suggested that a pointer be explicitly initialized to the null pointer value, which is traditionally specified in C with the standardized macro NULL: int *ptr = NULL; Dereferencing a null pointer in C produces undefined behavior, which could be catastrophic. However, most implementations simply halt execution of the program in question, usually with a segmentation fault.
However, initializing pointers unnecessarily could hinder program analysis, thereby hiding bugs. In any case, once a pointer has been declared, the next logical step is for it to point at something: ptr = &a; This assigns the value of the address of a to ptr. For example, if a is stored at memory location of 0x8130 then the value of ptr will be 0x8130 after the assignment. To dereference the pointer, an asterisk is used again: *ptr = 8; This means take the contents of ptr (which is 0x8130), "locate" that address in memory and set its value to 8. If a is later accessed again, its new value will be 8. This example may be clearer if memory is examined directly. Assume that a is located at address 0x8130 in memory and ptr at 0x8134; also assume this is a 32-bit machine such that an int is 32-bits wide. The following is what would be in memory after the following code snippet is executed: (The NULL pointer shown here is 0x00000000.) By assigning the address of a to ptr: ptr = &a; yields the following memory values: Then by dereferencing ptr by coding: *ptr = 8; the computer will take the contents of ptr (which is 0x8130), 'locate' that address, and assign 8 to that location yielding the following memory: Clearly, accessing a will yield the value of 8 because the previous instruction modified the contents of a by way of the pointer ptr. === Use in data structures === When setting up data structures like lists, queues and trees, it is necessary to have pointers to help manage how the structure is implemented and controlled. Typical examples of pointers are start pointers, end pointers, and stack pointers. These pointers can either be absolute (the actual physical address or a virtual address in virtual memory) or relative (an offset from an absolute start address ("base") that typically uses fewer bits than a full address, but will usually require one additional arithmetic operation to resolve). Relative addresses are a form of manual memory segmentation, and share many of its advantages and disadvantages.
A two-byte offset, containing a 16-bit unsigned integer, can be used to provide relative addressing for up to 64 KiB (2^16 bytes) of a data structure. This can easily be extended to 128, 256 or 512 KiB if the address pointed to is forced to be aligned on a half-word, word or double-word boundary (but requiring an additional "shift left" bitwise operation—by 1, 2 or 3 bits—in order to adjust the offset by a factor of 2, 4 or 8, before its addition to the base address). Generally, though, such schemes are a lot of trouble, and for convenience to the programmer absolute addresses (and underlying that, a flat address space) are preferred. A one-byte offset, such as the hexadecimal ASCII value of a character (e.g. X'29'), can be used to point to an alternative integer value (or index) in an array (e.g., X'01'). In this way, characters can be very efficiently translated from 'raw data' to a usable sequential index and then to an absolute address without a lookup table. ==== C arrays ==== In C, array indexing is formally defined in terms of pointer arithmetic; that is, the language specification requires that array[i] be equivalent to *(array + i). Thus in C, arrays can be thought of as pointers to consecutive areas of memory (with no gaps), and the syntax for accessing arrays is identical to that used to dereference pointers. For example, an array array can be declared and used in the following manner: int array[5];. This allocates a block of five integers and names the block array, which acts as a pointer to the block. Another common use of pointers is to point to dynamically allocated memory from malloc, which returns a consecutive block of memory of no less than the requested size that can be used as an array. While most operators on arrays and pointers are equivalent, the result of the sizeof operator differs.
In this example, sizeof(array) will evaluate to 5*sizeof(int) (the size of the array), while sizeof(ptr) will evaluate to sizeof(int*), the size of the pointer itself. Initial values for an array can be declared like int array[5] = {2, 4, 3, 1, 5};. If array is located in memory starting at address 0x1000 on a 32-bit little-endian machine, then memory from 0x1000 onward will contain the byte sequence 02 00 00 00 04 00 00 00 03 00 00 00 01 00 00 00 05 00 00 00 (values are in hexadecimal, like the addresses). Represented here are five integers: 2, 4, 3, 1, and 5. These five integers occupy 32 bits (4 bytes) each with the least-significant byte stored first (this is a little-endian CPU architecture) and are stored consecutively starting at address 0x1000. The syntax for C with pointers is: array means 0x1000; array + 1 means 0x1004: the "+ 1" means to add the size of 1 int, which is 4 bytes; *array means to dereference the contents of array. Considering the contents as a memory address (0x1000), look up the value at that location (0x0002); array[i] means element number i, 0-based, of array, which is translated into *(array + i). The last example is how to access the contents of array. Breaking it down: array + i is the memory location of the (i)th element of array, starting at i=0; *(array + i) takes that memory address and dereferences it to access the value. ==== C linked list ==== A linked list in C is typically defined as a structure containing a data member and a pointer to the next structure of the same type. This pointer-recursive definition is essentially the same as the reference-recursive definition from the language Haskell, data Link a = Nil | Cons a (Link a): Nil is the empty list, and Cons a (Link a) is a cons cell of type a with another link also of type a. The definition with references, however, is type-checked and does not use potentially confusing signal values. For this reason, data structures in C are usually dealt with via wrapper functions, which are carefully checked for correctness. === Pass-by-address using pointers === Pointers can be used to pass variables by their address, allowing their value to be changed.
For example, a function that receives the address of a variable can modify that variable, and the modification is visible to the caller. === Dynamic memory allocation === In some programs, the required amount of memory depends on what the user may enter. In such cases the programmer needs to allocate memory dynamically. This is done by allocating memory on the heap rather than on the stack, where variables usually are stored (although variables can also be stored in the CPU registers). Dynamic memory allocation can only be made through pointers, and, unlike common variables, dynamically allocated blocks cannot be given names. Pointers are used to store and manage the addresses of dynamically allocated blocks of memory. Such blocks are used to store data objects or arrays of objects. Most structured and object-oriented languages provide an area of memory, called the heap or free store, from which objects are dynamically allocated. Structure objects, for example, are commonly allocated this way and referenced through the returned pointer. The standard C library provides the function malloc() for allocating memory blocks from the heap. It takes the size of an object to allocate as a parameter and returns a pointer to a newly allocated block of memory suitable for storing the object, or it returns a null pointer if the allocation failed. A memory object that is no longer needed is dynamically deallocated, i.e., returned to the heap or free store; the standard C library provides the function free() for deallocating a previously allocated memory block and returning it back to the heap. === Memory-mapped hardware === On some computing architectures, pointers can be used to directly manipulate memory or memory-mapped devices. Assigning addresses to pointers is an invaluable tool when programming microcontrollers, for example declaring a pointer of type int and initialising it to a fixed hexadecimal address such as the constant 0x7FFF. In the mid-1980s, using the BIOS to access the video capabilities of PCs was slow.
Applications that were display-intensive typically used to access CGA video memory directly by casting the hexadecimal constant 0xB8000 to a pointer to an array of 80 unsigned 16-bit int values. Each value consisted of an ASCII code in the low byte, and a colour in the high byte. Thus, to put the letter 'A' at row 5, column 2 in bright white on blue, one would write the appropriate character/colour pair directly to the corresponding array element. === Use in control tables === Control tables that are used to control program flow usually make extensive use of pointers. The pointers, usually embedded in a table entry, may, for instance, be used to hold the entry points to subroutines to be executed, based on certain conditions defined in the same table entry. The pointers can, however, be simply indexes to other separate, but associated, tables comprising an array of the actual addresses or the addresses themselves (depending upon the programming language constructs available). They can also be used to point to earlier table entries (as in loop processing) or forward to skip some table entries (as in a switch or "early" exit from a loop). For this latter purpose, the "pointer" may simply be the table entry number itself and can be transformed into an actual address by simple arithmetic. == Typed pointers and casting == In many languages, pointers have the additional restriction that the object they point to has a specific type. For example, a pointer may be declared to point to an integer; the language will then attempt to prevent the programmer from pointing it to objects which are not integers, such as floating-point numbers, eliminating some errors. For example, given the C declarations int *money; and char *bags;, money would be an integer pointer and bags would be a char pointer. The assignment bags = money; would yield a compiler warning of "assignment from incompatible pointer type" under GCC because money and bags were declared with different types.
To suppress the compiler warning, it must be made explicit that you do indeed wish to make the assignment by typecasting it, as in bags = (char *)money;, which says to cast the integer pointer money to a char pointer and assign the result to bags. A 2005 draft of the C standard requires that casting a pointer derived from one type to one of another type should maintain the alignment correctness for both types (6.3.2.3 Pointers, par. 7). In languages that allow pointer arithmetic, arithmetic on pointers takes into account the size of the type. For example, adding an integer number to a pointer produces another pointer that points to an address that is higher by that number times the size of the type. This allows us to easily compute the address of elements of an array of a given type, as was shown in the C arrays example above. When a pointer of one type is cast to another type of a different size, the programmer should expect that pointer arithmetic will be calculated differently. In C, for example, if the money array starts at 0x2000 and sizeof(int) is 4 bytes whereas sizeof(char) is 1 byte, then money + 1 will point to 0x2004, but bags + 1 would point to 0x2001. Other risks of casting include loss of data when "wide" data is written to "narrow" locations (e.g. bags[0] = 65537;), unexpected results when bit-shifting values, and comparison problems, especially with signed vs unsigned values. Although it is impossible in general to determine at compile-time which casts are safe, some languages store run-time type information which can be used to confirm that these dangerous casts are valid at runtime. Other languages merely accept a conservative approximation of safe casts, or none at all. === Value of pointers === In C and C++, even if two pointers compare as equal, that does not mean they are equivalent.
In these languages and LLVM, the rule is interpreted to mean that "just because two pointers point to the same address, does not mean they are equal in the sense that they can be used interchangeably", a distinction between pointers referred to as their provenance. Casting to an integer type such as uintptr_t is implementation-defined, and the comparison it provides does not provide any more insight as to whether the two pointers are interchangeable. In addition, further conversion to bytes and arithmetic will throw off optimizers trying to keep track of the use of pointers, a problem still being elucidated in academic research. == Making pointers safer == As a pointer allows a program to attempt to access an object that may not be defined, pointers can be the origin of a variety of programming errors. However, the usefulness of pointers is so great that it can be difficult to perform programming tasks without them. Consequently, many languages have created constructs designed to provide some of the useful features of pointers without some of their pitfalls, also sometimes referred to as pointer hazards. In this context, pointers that directly address memory (as used in this article) are referred to as raw pointers, by contrast with smart pointers or other variants. One major problem with pointers is that as long as they can be directly manipulated as a number, they can be made to point to unused addresses or to data which is being used for other purposes. Many languages, including most functional programming languages and recent imperative programming languages like Java, replace pointers with a more opaque type of reference, typically referred to as simply a reference, which can only be used to refer to objects and not manipulated as numbers, preventing this type of error. Array indexing is handled as a special case. A pointer which does not have any address assigned to it is called a wild pointer.
Any attempt to use such uninitialized pointers can cause unexpected behavior, either because the initial value is not a valid address, or because using it may damage other parts of the program. The result is often a segmentation fault, storage violation or wild branch (if used as a function pointer or branch address). In systems with explicit memory allocation, it is possible to create a dangling pointer by deallocating the memory region it points into. This type of pointer is dangerous and subtle because a deallocated memory region may contain the same data as it did before it was deallocated but may be then reallocated and overwritten by unrelated code, unknown to the earlier code. Languages with garbage collection prevent this type of error because deallocation is performed automatically when there are no more references in scope. Some languages, like C++, support smart pointers, which use a simple form of reference counting to help track allocation of dynamic memory in addition to acting as a reference. In the absence of reference cycles, where an object refers to itself indirectly through a sequence of smart pointers, these eliminate the possibility of dangling pointers and memory leaks. Delphi strings support reference counting natively. The Rust programming language introduces a borrow checker, pointer lifetimes, and an optimisation based around option types for null pointers to eliminate pointer bugs, without resorting to garbage collection. == Special kinds of pointers == === Kinds defined by value === ==== Null pointer ==== A null pointer has a value reserved for indicating that the pointer does not refer to a valid object. Null pointers are routinely used to represent conditions such as the end of a list of unknown length or the failure to perform some action; this use of null pointers can be compared to nullable types and to the Nothing value in an option type. 
==== Dangling pointer ==== A dangling pointer is a pointer that does not point to a valid object and consequently may make a program crash or behave oddly. In the Pascal or C programming languages, pointers that are not specifically initialized may point to unpredictable addresses in memory. The following example code shows a dangling pointer: Here, p2 may point to anywhere in memory, so performing the assignment *p2 = 'b'; can corrupt an unknown area of memory or trigger a segmentation fault. ==== Wild branch ==== Where a pointer is used as the address of the entry point to a program or start of a function which doesn't return anything and is also either uninitialized or corrupted, if a call or jump is nevertheless made to this address, a "wild branch" is said to have occurred. In other words, a wild branch is a function pointer that is wild (dangling). The consequences are usually unpredictable and the error may present itself in several different ways depending upon whether or not the pointer is a "valid" address and whether or not there is (coincidentally) a valid instruction (opcode) at that address. The detection of a wild branch can present one of the most difficult and frustrating debugging exercises since much of the evidence may already have been destroyed beforehand or by execution of one or more inappropriate instructions at the branch location. If available, an instruction set simulator can usually not only detect a wild branch before it takes effect, but also provide a complete or partial trace of its history. === Kinds defined by structure === ==== Autorelative pointer ==== An autorelative pointer is a pointer whose value is interpreted as an offset from the address of the pointer itself; thus, if a data structure has an autorelative pointer member that points to some portion of the data structure itself, then the data structure may be relocated in memory without having to update the value of the auto relative pointer. 
The cited patent also uses the term self-relative pointer to mean the same thing. However, the meaning of that term has been used in other ways: to mean an offset from the address of a structure rather than from the address of the pointer itself; to mean a pointer containing its own address, which can be useful for reconstructing in any arbitrary region of memory a collection of data structures that point to each other. ==== Based pointer ==== A based pointer is a pointer whose value is an offset from the value of another pointer. This can be used to store and load blocks of data, assigning the address of the beginning of the block to the base pointer. === Kinds defined by use or datatype === ==== Multiple indirection ==== In some languages, a pointer can reference another pointer, requiring multiple dereference operations to get to the original value. While each level of indirection may add a performance cost, it is sometimes necessary in order to provide correct behavior for complex data structures. For example, in C it is typical to define a linked list in terms of an element that contains a pointer to the next element of the list: This implementation uses a pointer to the first element in the list as a surrogate for the entire list. If a new value is added to the beginning of the list, head has to be changed to point to the new element. Since C arguments are always passed by value, using double indirection allows the insertion to be implemented correctly, and has the desirable side-effect of eliminating special case code to deal with insertions at the front of the list: In this case, if the value of item is less than that of head, the caller's head is properly updated to the address of the new item. 
A basic example is in the argv argument to the main function in C (and C++), which is given in the prototype as char **argv—this is because the variable argv itself is a pointer to an array of strings (an array of arrays), so *argv is a pointer to the 0th string (by convention the name of the program), and **argv is the 0th character of the 0th string. ==== Function pointer ==== In some languages, a pointer can reference executable code, i.e., it can point to a function, method, or procedure. A function pointer will store the address of a function to be invoked. While this facility can be used to call functions dynamically, it is also a favorite technique of writers of viruses and other malicious software. ==== Back pointer ==== In doubly linked lists or tree structures, a back pointer held on an element 'points back' to the item referring to the current element. These are useful for navigation and manipulation, at the expense of greater memory use. == Simulation using an array index == It is possible to simulate pointer behavior using an index to a (normally one-dimensional) array. Primarily for languages which do not support pointers explicitly but do support arrays, the array can be thought of and processed as if it were the entire memory range (within the scope of the particular array) and any index to it can be thought of as equivalent to a general-purpose register in assembly language (that points to the individual bytes but whose actual value is relative to the start of the array, not its absolute address in memory). Assuming the array is, say, a contiguous 16 megabyte character data structure, individual bytes (or a string of contiguous bytes within the array) can be directly addressed and manipulated using the name of the array with a 31-bit unsigned integer as the simulated pointer (this is quite similar to the C arrays example shown above).
Pointer arithmetic can be simulated by adding or subtracting from the index, with minimal additional overhead compared to genuine pointer arithmetic. It is even theoretically possible, using the above technique, together with a suitable instruction set simulator to simulate any machine code or the intermediate (byte code) of any processor/language in another language that does not support pointers at all (for example Java / JavaScript). To achieve this, the binary code can initially be loaded into contiguous bytes of the array for the simulator to "read", interpret and execute entirely within the memory containing the same array. If necessary, to completely avoid buffer overflow problems, bounds checking can usually be inserted by the compiler (or if not, hand coded in the simulator). == Support in various programming languages == === Ada === Ada is a strongly typed language where all pointers are typed and only safe type conversions are permitted. All pointers are by default initialized to null, and any attempt to access data through a null pointer causes an exception to be raised. Pointers in Ada are called access types. Ada 83 did not permit arithmetic on access types (although many compiler vendors provided for it as a non-standard feature), but Ada 95 supports “safe” arithmetic on access types via the package System.Storage_Elements. === BASIC === Several old versions of BASIC for the Windows platform had support for STRPTR() to return the address of a string, and for VARPTR() to return the address of a variable. Visual Basic 5 also had support for OBJPTR() to return the address of an object interface, and for an ADDRESSOF operator to return the address of a function. The types of all of these are integers, but their values are equivalent to those held by pointer types. Newer dialects of BASIC, such as FreeBASIC or BlitzMax, have exhaustive pointer implementations, however. 
In FreeBASIC, arithmetic on ANY pointers (equivalent to C's void*) is performed as though the ANY pointer had a width of one byte. As in C, ANY pointers cannot be dereferenced. Also, casting between ANY and any other type's pointers will not generate any warnings. === C and C++ === In C and C++, pointers are variables that store addresses and can be null. Each pointer has a type it points to, but one can freely cast between pointer types (but not between a function pointer and an object pointer). A special pointer type called the “void pointer” allows pointing to any (non-function) object, but is limited by the fact that it cannot be dereferenced directly (it must first be cast to another pointer type). The address itself can often be directly manipulated by casting a pointer to and from an integral type of sufficient size, though the results are implementation-defined and may indeed cause undefined behavior; while earlier C standards did not have an integral type that was guaranteed to be large enough, C99 specifies the uintptr_t typedef name defined in <stdint.h>, but an implementation need not provide it. C++ fully supports C pointers and C typecasting. It also supports a new group of typecasting operators to help catch some unintended dangerous casts at compile-time. Since C++11, the C++ standard library also provides smart pointers (unique_ptr, shared_ptr and weak_ptr) which can be used in some situations as a safer alternative to primitive C pointers. C++ also supports another form of reference, quite different from a pointer, called simply a reference or reference type. Pointer arithmetic, that is, the ability to modify a pointer's target address with arithmetic operations (as well as magnitude comparisons), is restricted by the language standard to remain within the bounds of a single array object (or just after it), and will otherwise invoke undefined behavior. Adding or subtracting from a pointer moves it by a multiple of the size of its datatype.
For example, adding 1 to a pointer to 4-byte integer values will increment the pointer's pointed-to byte-address by 4. This has the effect of incrementing the pointer to point at the next element in a contiguous array of integers—which is often the intended result. Pointer arithmetic cannot be performed on void pointers because the void type has no size, and thus the pointed address can not be added to, although gcc and other compilers will perform byte arithmetic on void* as a non-standard extension, treating it as if it were char *. Pointer arithmetic provides the programmer with a single way of dealing with different types: adding and subtracting the number of elements required instead of the actual offset in bytes. (Pointer arithmetic with char * pointers uses byte offsets, because sizeof(char) is 1 by definition.) In particular, the C definition explicitly declares that the syntax a[n], which is the n-th element of the array a, is equivalent to *(a + n), which is the content of the element pointed by a + n. This implies that n[a] is equivalent to a[n], and one can write, e.g., a[3] or 3[a] equally well to access the fourth element of an array a. While powerful, pointer arithmetic can be a source of computer bugs. It tends to confuse novice programmers, forcing them into different contexts: an expression can be an ordinary arithmetic one or a pointer arithmetic one, and sometimes it is easy to mistake one for the other. In response to this, many modern high-level computer languages (for example Java) do not permit direct access to memory using addresses. Also, the safe C dialect Cyclone addresses many of the issues with pointers. See C programming language for more discussion. The void pointer, or void*, is supported in ANSI C and C++ as a generic pointer type. A pointer to void can store the address of any object (not function), and, in C, is implicitly converted to any other object pointer type on assignment, but it must be explicitly cast if dereferenced. 
K&R C used char* for the “type-agnostic pointer” purpose (before ANSI C). C++ does not allow the implicit conversion of void* to other pointer types, even in assignments. This was a design decision to avoid careless and even unintended casts, though most compilers only output warnings, not errors, when encountering other casts. In C++, there is no void& (reference to void) to complement void* (pointer to void), because references behave like aliases to the variables they point to, and there can never be a variable whose type is void. ==== Pointer-to-member ==== In C++ pointers to non-static members of a class can be defined. If a class C has a member T a then &C::a is a pointer to the member a of type T C::*. This member can be an object or a function. They can be used on the right-hand side of operators .* and ->* to access the corresponding member. ==== Pointer declaration syntax overview ==== These pointer declarations cover most variants of pointer declarations. Of course it is possible to have triple pointers, but the main principles behind a triple pointer already exist in a double pointer. The naming used here is what the expression typeid(type).name() equals for each of these types when using g++ or clang. The following declarations involving pointers-to-member are valid only in C++: The () and [] have a higher priority than *. === C# === In the C# programming language, pointers are supported by either marking blocks of code that include pointers with the unsafe keyword, or by using the System.Runtime.CompilerServices assembly provisions for pointer access. The syntax is essentially the same as in C++, and the address pointed can be either managed or unmanaged memory. However, pointers to managed memory (any pointer to a managed object) must be declared using the fixed keyword, which prevents the garbage collector from moving the pointed object as part of memory management while the pointer is in scope, thus keeping the pointer address valid. 
However, an exception to this is from using the IntPtr structure, which is a memory-managed equivalent to int*, and does not require the unsafe keyword nor the CompilerServices assembly. This type is often returned when using methods from the System.Runtime.InteropServices namespace. The .NET framework includes many classes and methods in the System and System.Runtime.InteropServices namespaces (such as the Marshal class) which convert .NET types (for example, System.String) to and from many unmanaged types and pointers (for example, LPWSTR or void*) to allow communication with unmanaged code. Most such methods have the same security permission requirements as unmanaged code, since they can affect arbitrary places in memory. === COBOL === The COBOL programming language supports pointers to variables. Primitive or group (record) data objects declared within the LINKAGE SECTION of a program are inherently pointer-based, where the only memory allocated within the program is space for the address of the data item (typically a single memory word). In program source code, these data items are used just like any other WORKING-STORAGE variable, but their contents are implicitly accessed indirectly through their LINKAGE pointers. Memory space for each pointed-to data object is typically allocated dynamically using external CALL statements or via embedded extended language constructs such as EXEC CICS or EXEC SQL statements. Extended versions of COBOL also provide pointer variables declared with USAGE IS POINTER clauses. The values of such pointer variables are established and modified using SET and SET ADDRESS statements. Some extended versions of COBOL also provide PROCEDURE-POINTER variables, which are capable of storing the addresses of executable code. === PL/I === The PL/I language provides full support for pointers to all data types (including pointers to structures), recursion, multitasking, string handling, and extensive built-in functions.
PL/I was quite a leap forward compared to the programming languages of its time. PL/I pointers are untyped, and therefore no casting is required for pointer dereferencing or assignment. The declaration syntax for a pointer is DECLARE xxx POINTER;, which declares a pointer named "xxx". Pointers are used with BASED variables. A based variable can be declared with a default locator (DECLARE xxx BASED(ppp);) or without one (DECLARE xxx BASED;), where xxx is a based variable, which may be an element variable, a structure, or an array, and ppp is the default pointer. Such a variable can be addressed without an explicit pointer reference (xxx=1;), or may be addressed with an explicit reference to the default locator (ppp), or to any other pointer (qqq->xxx=1;). Pointer arithmetic is not part of the PL/I standard, but many compilers allow expressions of the form ptr = ptr±expression. IBM PL/I also has the builtin function PTRADD to perform the arithmetic. Pointer arithmetic is always performed in bytes. IBM Enterprise PL/I compilers have a new form of typed pointer called a HANDLE. === D === The D programming language is a derivative of C and C++ which fully supports C pointers and C typecasting. === Eiffel === The Eiffel object-oriented language employs value and reference semantics without pointer arithmetic. Nevertheless, pointer classes are provided. They offer pointer arithmetic, typecasting, explicit memory management, interfacing with non-Eiffel software, and other features. === Fortran === Fortran-90 introduced a strongly typed pointer capability. Fortran pointers contain more than just a simple memory address. They also encapsulate the lower and upper bounds of array dimensions, strides (for example, to support arbitrary array sections), and other metadata. An association operator, =>, is used to associate a POINTER to a variable which has a TARGET attribute. The Fortran-90 ALLOCATE statement may also be used to associate a pointer to a block of memory.
Such pointers and the ALLOCATE statement can be used, for example, to define and create a linked list structure. Fortran-2003 adds support for procedure pointers. Also, as part of the C Interoperability feature, Fortran-2003 supports intrinsic functions for converting C-style pointers into Fortran pointers and back. === Go === Go has pointers. Its declaration syntax is equivalent to that of C, but written the other way around, ending with the type. Unlike C, Go has garbage collection, and disallows pointer arithmetic. Reference types, like in C++, do not exist. Some built-in types, like maps and channels, are boxed (i.e. internally they are pointers to mutable structures), and are initialized using the make function. In an approach to unified syntax between pointers and non-pointers, the arrow (->) operator has been dropped: the dot operator on a pointer refers to the field or method of the dereferenced object. This, however, only works with one level of indirection. === Java === There is no explicit representation of pointers in Java. Instead, more complex data structures like objects and arrays are implemented using references. The language does not provide any explicit pointer manipulation operators. It is still possible for code to attempt to dereference a null reference (null pointer), however, which results in a run-time exception being thrown. The space occupied by unreferenced memory objects is recovered automatically by garbage collection at run-time. === Modula-2 === Pointers are implemented very much as in Pascal, as are VAR parameters in procedure calls. Modula-2 is even more strongly typed than Pascal, with fewer ways to escape the type system. Some of the variants of Modula-2 (such as Modula-3) include garbage collection. === Oberon === Much as with Modula-2, pointers are available. There are still fewer ways to evade the type system and so Oberon and its variants are still safer with respect to pointers than Modula-2 or its variants.
As with Modula-3, garbage collection is a part of the language specification. === Pascal === Unlike many languages that feature pointers, standard ISO Pascal only allows pointers to reference dynamically created variables that are anonymous, and does not allow them to reference standard static or local variables. It does not have pointer arithmetic. Pointers also must have an associated type, and a pointer to one type is not compatible with a pointer to another type (e.g. a pointer to a char is not compatible with a pointer to an integer). This helps eliminate the type security issues inherent in other pointer implementations, particularly those used for PL/I or C. It also removes some risks caused by dangling pointers, but the ability to dynamically release referenced space using the dispose standard procedure (which has the same effect as the free library function found in C) means that the risk of dangling pointers has not been entirely eliminated. However, in some commercial and open-source Pascal (or derivative) compiler implementations, such as Free Pascal, Turbo Pascal or the Object Pascal in Embarcadero Delphi, a pointer is allowed to reference standard static or local variables and can be cast from one pointer type to another. Moreover, pointer arithmetic is unrestricted: adding to or subtracting from a pointer moves it by that number of bytes in either direction, but using the Inc or Dec standard procedures with it moves the pointer by the size of the data type it is declared to point to. An untyped pointer is also provided under the name Pointer, which is compatible with other pointer types. === Perl === The Perl programming language supports pointers, although they are rarely used, in the form of the pack and unpack functions. These are intended only for simple interactions with compiled OS libraries. In all other cases, Perl uses references, which are typed and do not allow any form of pointer arithmetic. They are used to construct complex data structures.
== See also == == Notes == == References == == External links == PL/I List Processing Paper from the June 1967 issue of CACM cdecl.org A tool to convert pointer declarations to plain English Over IQ.com A beginner-level guide describing pointers in plain English Pointers and Memory Introduction to pointers – Stanford Computer Science Education Library Pointers in C programming Archived 2019-06-09 at the Wayback Machine A visual model for beginner C programmers 0pointer.de A terse list of minimum-length source codes that dereference a null pointer in several different programming languages "The C book" – containing pointer examples in ANSI C Joint Technical Committee ISO/IEC JTC 1, Subcommittee SC 22, Working Group WG 14 (2007-09-08). International Standard ISO/IEC 9899 (PDF). Committee draft.
https://en.wikipedia.org/wiki/Pointer_(computer_programming)
Tombstones are a mechanism to detect dangling pointers and mitigate the problems they can cause in computer programs. Dangling pointers can appear in certain programming languages, e.g. C, C++ and assembly languages. A tombstone is a structure that acts as an intermediary between a pointer and its target, often heap-dynamic data in memory. The pointer – sometimes called the handle – points only at the tombstone and never at its actual target. When the data is deallocated, the tombstone is set to null (or, more generally, to a value that is illegal for a pointer in the given runtime environment), indicating that the variable no longer exists. This mechanism prevents the use of invalid pointers, which would otherwise access the memory area that once belonged to the now-deallocated variable — memory that may already contain other data — in turn leading to corruption of in-memory data. Depending on the operating system, the CPU can automatically detect such an invalid access (e.g. for the null value: a null pointer dereference error). This helps in debugging by pointing to the actual cause, a programming error, and it can also be used to abort the program in production use, preventing it from continuing with invalid data structures. In more general terms, a tombstone can be understood as a marker for "this data is no longer here". For example, in filesystems it may be efficient when deleting files to mark them as "dead" instead of immediately reclaiming all their data blocks. The downsides of using tombstones include computational overhead and additional memory consumption: extra processing is necessary to follow the path from the pointer through the tombstone to the data, and extra memory is necessary to retain tombstones for every pointer throughout the program. One other problem is that all the code that needs to work with the pointers in question must be implemented to use the tombstone mechanism.
Among popular programming languages, C++ implements the tombstone pattern in its standard library as a weak pointer using std::weak_ptr. Built-in support by programming languages or the compiler is not necessary to use this mechanism. == See also == Locks-and-keys Multiple indirection == References ==
https://en.wikipedia.org/wiki/Tombstone_(programming)
The Phoenix Program (Vietnamese: Chiến dịch Phụng Hoàng) was designed and initially coordinated by the United States Central Intelligence Agency (CIA) during the Vietnam War, involving the American and South Vietnamese militaries and a small number of special forces operatives from the Australian Army Training Team Vietnam. In 1970, CIA responsibility was phased out, and the program was put under the authority of the Civil Operations and Revolutionary Development Support (CORDS). The program, which lasted from 1968 to 1972, was designed to identify and destroy the Viet Cong (VC) via infiltration, assassination, torture, capture, counter-terrorism, and interrogation. The CIA described it as "a set of programs that sought to attack and destroy the political infrastructure of the Viet Cong." The Phoenix Program was premised on the idea that North Vietnamese infiltration required local support within noncombat civilian populations, referred to as the "VC infrastructure" and "political branch", which had purportedly coordinated the insurgency. Throughout the program, Phoenix "neutralized" 81,740 people suspected of VC membership, of whom 26,369 were killed; the rest surrendered or were captured. Of those killed, 87% were attributed to conventional military operations by South Vietnamese and American forces, while the remaining 13% were attributed to Phoenix Program operatives.: 17–21  The Phoenix Program was heavily criticized on various grounds, including the number of neutral civilians killed, the nature of the program (which critics have labelled a "civilian assassination program"), the use of torture and other coercive methods, and the program's exploitation for personal politics. Nevertheless, the program was very successful at suppressing VC political and revolutionary activities. Public disclosure of the program led to significant criticism, including hearings by the US Congress, and the CIA was pressured into shutting it down.
A similar program, Plan F-6, continued under the government of South Vietnam. == Background == Shortly after the 1954 Geneva Conference and the adoption of the Geneva Accords, the government of North Vietnam organized a force of several thousand to mobilize support for the communists in the upcoming elections. When it became clear that the elections would not take place, these forces became the seeds of what would eventually become the Viet Cong, a North Vietnamese insurgency whose goal was the unification of Vietnam under the control of the North. While counterinsurgency efforts had been ongoing since the first days of US military involvement in Vietnam, they had been unsuccessful in dealing with either the armed VC or the VC's civilian infrastructure (VCI), which swelled to between 80,000 and 150,000 members by the mid-1960s. The VCI, unlike the armed component of the VC, was tasked with support activities including recruiting, political indoctrination, psychological operations, intelligence collection, and logistical support. The VCI rapidly set up shadow governments in rural South Vietnam by replacing local leadership in small rural hamlets loyal to the Saigon government with communist cadres. The VCI chose small rural villages because they lacked close supervision by the Saigon government or the South Vietnamese Army. VCI tactics for establishing local communist control began by identifying towns and villages with strategic importance to either the VC or the North Vietnamese People's Army of Vietnam, and local populations with communist sympathies, with the Hanoi government putting a great deal of emphasis on the activities and success of the VCI. After a community was identified, the VCI would threaten local leadership with reprisals if they refused to cooperate, or kidnap local leaders and send them to reeducation camps in North Vietnam.
Local leaders who continued to refuse to cooperate or threatened to contact the Saigon government were murdered along with their families. After VCI agents took control of an area, it would be used to quarter and resupply VC guerrillas, supply intelligence on US and South Vietnamese military movements, provide taxes to VCI cadres, and conscript locals into the VC. == History == On 9 May 1967 all pacification efforts by the United States came under the authority of the Civil Operations and Revolutionary Development Support (CORDS). In June 1967, as part of CORDS, the Intelligence Coordination and Exploitation Program (ICEX) was created, from a plan drafted by Nelson Brickham. The purpose of the organization centered on gathering and coordinating information on the VC. In December 1967 the South Vietnamese Prime Minister signed a decree establishing Phụng Hoàng (named after a mythical bird) to coordinate the numerous South Vietnamese entities involved in the anti-VCI campaign.: 58  The 1968 Tet Offensive demonstrated the importance of the VCI.: 50  In July 1968 South Vietnamese President Nguyễn Văn Thiệu signed a decree implementing Phụng Hoàng.: 56  The two major components of the program were Provincial Reconnaissance Units (PRUs) and regional interrogation centers. PRUs would kill or capture suspected VC members, as well as civilians who were thought to have information on VC activities. Many of these people were taken to interrogation centers and were tortured in an attempt to gain intelligence on VC activities in the area. The information extracted at the centers was given to military commanders, who would use it to task the PRUs with further capture and assassination missions. The program's effectiveness was measured in the number of VC members who were "neutralized", a euphemism meaning imprisoned, persuaded to defect, or killed. The interrogation centers and PRUs were originally developed by the CIA's Saigon station chief Peer de Silva.
De Silva was a proponent of a military strategy known as counter-terrorism (the tactics and techniques that government, military, law enforcement, and intelligence agencies use to combat or prevent terrorist activities) and believed that it should be applied strategically to "enemy civilians" in order to reduce civilian support for the VC. The PRUs were designed with this in mind, and began targeting suspected VC members in 1964. Originally, the PRUs were known as "Counter Terror" teams, but they were renamed "Provincial Reconnaissance Units" after CIA officials "became wary of the adverse publicity surrounding the use of the word 'terror'". Officially, Phoenix operations continued until December 1972, although certain aspects continued until the fall of Saigon in 1975. == Agencies and individuals involved in the program == Central Intelligence Agency United States special operations forces U.S. Army intelligence collection units from the U.S. Military Assistance Command, Vietnam (MACV—the joint-service command that provided command and control for all U.S. advisory and assistance efforts in Vietnam) US Navy SEAL Detachment Bravo USMC, 1st Force Reconnaissance Company stationed near Da Nang Special forces operatives from the Australian Army Training Team Vietnam (AATTV) Republic of Vietnam National Police Field Force == Operations == The chief aspect of the Phoenix Program was the collection of intelligence information. VC members would then be captured, converted, or killed. Emphasis for the enforcement of the operation was placed on local government militia and police forces, rather than the military, as the main operational arm of the program. According to journalist Douglas Valentine, "Central to Phoenix is the fact that it targeted civilians, not soldiers". The Phoenix Program took place under special laws that allowed the arrest and prosecution of suspected communists.
To avoid abuses such as phony accusations for personal reasons, or to rein in overzealous officials who might not be diligent enough in pursuing evidence before making arrests, the laws required three separate sources of evidence to convict an individual targeted for neutralization. If a suspected VC member was found guilty, they could be held in prison for two years, with renewable two-year sentences totaling up to six years. According to MACV Directive 381-41, the intent of Phoenix was to attack the VC with a "rifle shot rather than a shotgun approach to target key political leaders, command/control elements and activists in the VCI [Viet Cong Infrastructure]." The VCI was known by the communists as the Revolutionary Infrastructure. Heavy-handed operations—such as random cordons and searches, large-scale and lengthy detentions of innocent civilians, and excessive use of firepower—had a negative effect on the civilian population. Intelligence derived from interrogations was often used to carry out "search and destroy" missions aimed at finding and killing VC members. 87% of those killed during the Phoenix Program were killed in conventional military operations. Many of those killed were only identified as members of the VCI following military engagements, which were often started by the VC. Between January 1970 and March 1971, 94% of those killed as a result of the program were killed during military operations (9,827 out of 10,443 VCI killed). 
=== Torture === According to Valentine, methods of torture that were utilized at the interrogation centers included: Rape, gang rape, rape using eels, snakes, or hard objects, and rape followed by murder; electrical shock ("the Bell Telephone Hour") rendered by attaching wires to the genitals or other sensitive parts of the body, like the tongue; "the water treatment"; "the airplane," in which a prisoner's arms were tied behind the back and the rope looped over a hook on the ceiling, suspending the prisoner in midair, after which he or she was beaten; beatings with rubber hoses and whips; and the use of police dogs to maul prisoners. Military intelligence officer K. Barton Osborn reports that he witnessed "the use of the insertion of the 6-inch dowel into the canal of one of my detainee's ears, and the tapping through the brain until dead. The starvation to death (in a cage), of a Vietnamese woman who was suspected of being part of the local political education cadre in one of the local villages ... The use of electronic gear such as sealed telephones attached to ... both the women's vaginas and men's testicles [to] shock them into submission." Osborn's claims have been refuted by author Gary Kulik, who states that Osborn made exaggerated, contradictory and false claims and that his colleagues stated that he liked making "fantastic statements" and that he "frequently made exaggerated remarks in order to attract attention to himself.": 134–138  Osborn served with the United States Marine Corps in I Corps in 1967–1968, before the Phoenix Program was implemented. Torture was carried out by South Vietnamese forces, with the CIA and special forces playing a supervisory role. === Targeted killings === Phoenix operations often aimed to assassinate targets or kill them through other means. PRU units often anticipated resistance in disputed areas, and often operated on a shoot-first basis.
Lieutenant Vincent Okamoto, an intelligence-liaison officer for the Phoenix Program for two months in 1968 and a recipient of the Distinguished Service Cross said the following: The problem was, how do you find the people on the blacklist? It's not like you had their address and telephone number. The normal procedure would be to go into a village and just grab someone and say, "Where's Nguyen so-and-so?" Half the time the people were so afraid they would not say anything. Then a Phoenix team would take the informant, put a sandbag over his head, poke out two holes so he could see, put commo wire around his neck like a long leash, and walk him through the village and say, "When we go by Nguyen's house scratch your head." Then that night Phoenix would come back, knock on the door, and say, "April Fool, motherfucker." Whoever answered the door would get wasted. As far as they were concerned whoever answered was a Communist, including family members. Sometimes they'd come back to camp with ears to prove that they killed people. William Colby denied that the program was an assassination program stating: "To call it a program of murder is nonsense ... They were of more value to us alive than dead, and therefore, the object was to get them alive." His instructions to field officers stated "Our training emphasizes the desirability of obtaining these target individuals alive and of using intelligent and lawful methods of interrogation to obtain the truth of what they know about other aspects of the VCI ... [U.S. personnel] are specifically not authorized to engage in assassinations or other violations of the rules of land warfare." == Strategic and operational effect == Between 1968 and 1972, Phoenix officially "neutralized" (meaning imprisoned, persuaded to defect, or killed) 81,740 people suspected of VC membership, of whom 26,369 were killed, while Seymour Hersh wrote that South Vietnamese official statistics estimated that 41,000 were killed. 
A significant number of VC were killed, and between 1969 and 1971, the program was quite successful in destroying VC infrastructure in many important areas. 87 percent of those killed in the program were attributed to conventional military operations by South Vietnamese and American forces; the remainder were killed by Phoenix Program operatives.: 17–21  By 1970, communist plans repeatedly emphasized attacking the government's pacification program and specifically targeted Phoenix officials. The VC imposed assassination quotas. In 1970, for example, communist officials near Da Nang in northern South Vietnam instructed their assassins to "kill 1,400 persons" deemed to be government "tyrant[s]" and to "annihilate" anyone involved with the pacification program.: 20–21  Several North Vietnamese officials have made statements about the effectiveness of Phoenix. According to William Colby, "in the years since 1975, I have heard several references to North and South Vietnamese communists who state that, in their mind, the toughest period that they faced from 1960 to 1975 was the period from 1968 to '72 when the Phoenix Program was at work." The CIA said that through Phoenix they were able to learn the identity and structure of the VCI in every province. According to Stuart A. Herrington: "Regardless of how effective the Phoenix Program was or wasn't, area by area, the communists thought it was very effective. They saw it as a significant threat to the viability of the revolution because, to the extent that you could ... carve out the shadow government, their means of control over the civilian population was dealt a death blow. And that's why, when the war was over, the North Vietnamese reserved "special treatment" for those who had worked in the Phoenix Program. They considered it a mortal threat to the revolution." 
== Public response and legal proceedings == The Phoenix Program was not generally known, during most of the time it was operational, to either the American public or American officials in Washington. In 1970, author Frances FitzGerald made several arguments to then-U.S. National Security Advisor Henry Kissinger against the program, which she alludes to in Fire in the Lake. One of the first people to criticize Phoenix publicly was Ed Murphy, a peace activist and former military intelligence soldier, in 1970. There was eventually a series of U.S. Congressional hearings. In 1971, on the final day of hearings on "U.S. Assistance Programs in Vietnam", Osborn described the Phoenix Program as a "sterile depersonalized murder program." Consequently, the military command in Vietnam issued a directive that reiterated that it had based the anti-VCI campaign on South Vietnamese law, that the program was in compliance with the laws of land warfare, and that U.S. personnel had the responsibility to report breaches of the law. Former CIA analyst Samuel A. Adams, in an interview with CBC News, described the program as basically an assassination program that also included torture, and said that its operatives would kill people by throwing them out of helicopters to threaten and intimidate those they wanted to interrogate. While acknowledging that "No one can prove the null hypothesis that no prisoner was ever thrown from a helicopter," Gary Kulik states that "no such story has ever been corroborated" and that the noise inside a helicopter would make conducting an interrogation impossible.: 138  According to Nick Turse, abuses were common. In many instances, rival Vietnamese would report their enemies as "VC" in order to get U.S. troops to kill them. In many cases, Phung Hoang chiefs were incompetent bureaucrats who used their positions to enrich themselves.
Phoenix tried to address this problem by establishing monthly neutralization quotas, but these often led to fabrications or, worse, false arrests. In some cases, district officials accepted bribes from the VC to release certain suspects. After Phoenix Program abuses began receiving negative publicity, the program was officially shut down, although it continued under the name Plan F-6 with the government of South Vietnam in control. == See also == Edward Lansdale Nguyễn Hợp Đoàn Operation Condor Russell Tribunal Special Activities Division Tran Ngoc Chau United States war crimes Vietnam War Crimes Working Group Winter Soldier Investigation == Notes == == Citations == == References == == Further reading == Buckley, Kevin (19 June 1972). "Pacification's Deadly Price". Newsweek. pp. 42–43. Chomsky, Noam; Herman, Edward S. (1973). Counter-Revolutionary Violence: Bloodbaths in Fact & Propaganda. Andover, Mass.: Warner Modular Publications. OCLC 2358907. Complete text at Noam Chomsky's Web site. Cook, John L. (1973). The Advisor: The Phoenix Program in Vietnam. Philadelphia: Dorrance & Company. ISBN 978-0-8059-1925-7. OCLC 250035420. Grant, Zalin (1991). Facing the Phoenix: The CIA and the Political Defeat of the United States in Vietnam. New York: W. W. Norton. ISBN 978-0-393-02925-3. OCLC 440829893. Herrington, Stuart (1982). Silence Was a Weapon: The Vietnam War in the Villages: A Personal Perspective. Novato, Cal.: Presidio Press. ISBN 978-0-89141-140-6. OCLC 7923168. Reprinted as Stalking the Vietcong: Inside Operation Phoenix: A Personal Account. Herman, Edward S.; Chomsky, Noam (1979). The Washington Connection and Third World Fascism. Political Economy of Human Rights: Volume 1. Boston: South End Press. ISBN 978-0-89608-090-4. OCLC 855290980. Hersh, Seymour (1972). Cover-Up: The Army's Secret Investigation of the Massacre at My Lai 4. New York: Random House. ISBN 978-0-394-47460-1. OCLC 251832675. Luce, Don (1973). Hostages of War: Saigon's Political Prisoners. 
Washington, D.C.: Indochina Resource Center. OCLC 471579109. McCoy, Alfred W. (2012). Torture and Impunity: The U.S. Doctrine of Coercive Interrogation. University of Wisconsin Press. Moyar, Mark (1997). Phoenix and the Birds of Prey: The CIA's Secret Campaign to Destroy the Viet Cong. Annapolis, Md.: Naval Institute Press. ISBN 978-1-55750-593-4. OCLC 468627566. Scott, Peter (1998). The Lost Crusade: America's Secret Cambodian Mercenaries. Annapolis, Md.: Naval Institute Press. ISBN 978-1-55750-846-1. OCLC 466612519. Tran Ngoc Chau; Fermoyle, Ken (2013). Vietnam Labyrinth: Allies, Enemies and Why the US Lost the War. Lubbock, Texas: Texas Tech University Press. ISBN 978-0-89672-771-7. OCLC 939834065. == External links == Documents and Taped Interviews from the Phoenix Program at the Internet Archive Vietnam: A Television History; Interview with Carl F. Bernard, 1981 on the Vietnam War, including the effectiveness of the Phoenix Program. WGBH-TV Open Vault. Served in World War II, Korea, Laos and Vietnam.
https://en.wikipedia.org/wiki/Phoenix_Program
This is a list of functional programming topics. == Foundational concepts == Programming paradigm Declarative programming Programs as mathematical objects Function-level programming Purely functional programming Total functional programming Lambda programming Static scoping Higher-order function Referential transparency == Lambda calculus == Currying Lambda abstraction Church–Rosser theorem Extensionality Church numeral == Combinatory logic == Fixed point combinator SKI combinator calculus B, C, K, W system SECD machine Graph reduction machine == Intuitionistic logic == Sequent, sequent calculus Natural deduction Intuitionistic type theory BHK interpretation Curry–Howard correspondence Linear logic Game semantics == Type theory == Typed lambda calculus Typed and untyped languages Type signature Type inference Datatype Algebraic data type (generalized) Type variable First-class value Polymorphism Calculus of constructions == Denotational semantics == Domain theory Directed complete partial order Knaster–Tarski theorem == Category theory == Cartesian closed category Yoneda lemma == Operational issues == Graph reduction Combinator graph reduction Strict programming language Lazy evaluation, eager evaluation Speculative evaluation Side effect Assignment Setq Closure Continuation Continuation passing style Operational semantics State transition system Simulation preorder Bisimulation Monads in functional programming Exception handling Garbage collection == Programming languages == Clean Clojure Elixir Erlang FP F# Haskell Glasgow Haskell Compiler Gofer Hugs Template Haskell ISWIM JavaScript Kent Recursive Calculator Lisp AutoLISP Common Lisp Emacs Lisp Scheme Mercury Miranda ML (Category:ML programming language family) OCaml Standard ML Pure, predecessor Q Q (programming language from Kx Systems) Quantum programming Scala SISAL Ωmega
https://en.wikipedia.org/wiki/List_of_functional_programming_topics
Programming languages have been classified into several programming language generations. Historically, this classification was used to indicate increasing power of programming styles. Later writers have somewhat redefined the meanings as distinctions previously seen as important became less significant to current practice. == Generations == === First generation (1GL) === A first-generation programming language (1GL) is a machine-level programming language. These are the languages that can be directly executed by a central processing unit (CPU). The instructions in 1GL are expressed in binary, represented as 1s and 0s (or occasionally via octal or hexadecimal to the programmer). This makes the language suitable for execution by the machine but far more difficult for a human programmer to learn and interpret. First-generation programming languages are rarely used by programmers in the twenty-first century, but they were universally used to program early computers, before assembly languages were invented and when computer time was too scarce to be spent running an assembler. === Second generation (2GL) === Examples: assembly languages Second-generation programming language (2GL) is a generational way to categorize assembly languages. === Third generation (3GL) === Examples: C, C++, Java, Python, PHP, Perl, C#, BASIC, Pascal, Fortran, ALGOL, COBOL 3GLs are much more machine-independent (portable) and more programmer-friendly. This includes features like improved support for aggregate data types and expressing concepts in a way that favors the programmer, not the computer. A third-generation language improves over a second-generation language by having the computer take care of non-essential details. 3GLs are more abstract than previous generations of languages, and thus can be considered higher-level languages than their first- and second-generation counterparts. First introduced in the late 1950s, Fortran, ALGOL, and COBOL are examples of early 3GLs.
Most popular general-purpose languages today, such as C, C++, C#, Java, and BASIC, are also third-generation languages, although each of these languages can be further subdivided into other categories based on other contemporary traits. Most 3GLs support structured programming. Many support object-oriented programming. Traits like these are more often used to describe a language rather than just being a 3GL. === Fourth generation (4GL) === Examples: ABAP, Unix shell, SQL, PL/SQL, Oracle Reports, R, Halide Fourth-generation languages tend to be specialized toward very specific programming domains. 4GLs may include support for database management, report generation, mathematical optimization, GUI development, or web development. === Fifth generation (5GL) === Examples: Prolog, OPS5, Mercury, CVXGen, Geometry Expert A fifth-generation programming language (5GL) is any programming language based on problem-solving using constraints given to the program, rather than using an algorithm written by a programmer. They may use artificial intelligence techniques to solve problems in this way. Most constraint-based and logic programming languages and some other declarative languages are fifth-generation languages. While fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer. This way, the user only needs to worry about what problems need to be solved and what conditions need to be met, without worrying about how to implement a routine or algorithm to solve them. Fifth-generation languages are used mainly in artificial intelligence (AI) research. OPS5 and Mercury are examples of fifth-generation languages, as is ICAD, which was built upon Lisp. KL-ONE is an example of a related idea, a frame language.
== History == The terms "first-generation" and "second-generation" programming language were not used prior to the coining of the term "third-generation"; none of these three terms are mentioned in early compendiums of programming languages. The introduction of a third generation of computer technology coincided with the creation of a new generation of programming languages. The marketing for this generational shift in machines correlated with several important changes in what were called high-level programming languages, discussed below, giving technical content to the second/third-generation distinction among high-level programming languages as well, while retroactively renaming machine code languages as first generation and assembly languages as second generation. Initially, all programming languages at a higher level than assembly were termed "third-generation", but later on, the term "fourth-generation" was introduced to try to differentiate the (then) new declarative languages (such as Prolog and domain-specific languages) which claimed to operate at an even higher level, and in a domain even closer to the user (e.g. at a natural-language level), than the original, imperative high-level languages such as Pascal, C, ALGOL, Fortran, BASIC, etc. "Generational" classification of high-level languages (third generation and later) was never fully precise and was later largely abandoned, with more precise classifications gaining common usage, such as object-oriented, declarative and functional. C gave rise to C++ and later to Java and C#; Lisp to CLOS; Ada to Ada 2012; and even COBOL to COBOL 2002. New languages have emerged in that "generation" as well. == See also == Timeline of programming languages == References ==
https://en.wikipedia.org/wiki/Programming_language_generations
This is a list of television programs currently or formerly broadcast by Cartoon Network in the United States. The network was launched on October 1, 1992, and airs mainly animated programming, ranging from action to animated comedy. In its early years, Cartoon Network's programming was predominantly made up of reruns of Looney Tunes, Tom and Jerry, and Hanna-Barbera shows. Cartoon Network's first original series were The Moxy Show and the late-night satirical animated talk show Space Ghost Coast to Coast (the latter moved to Adult Swim when that block launched on September 2, 2001). The What a Cartoon! series of showcase shorts, launched in 1995, led to the creation of many Cartoon Network original series, collectively branded as "Cartoon Cartoons". Cartoon Network has also broadcast several feature films, mostly animated or containing animated sequences, under its "Cartoon Theater" block, later renamed "Flicks". == Current programming == === Original programming === ==== Cartoon Network Studios ==== ==== Warner Bros. Animation ==== ==== Hanna-Barbera Studios Europe ==== ==== Preschool (Cartoonito) ==== === Acquired programming === ==== American co-productions ==== ==== Canadian co-productions ==== ==== French/Canadian co-productions ==== ==== Preschool ==== === Repeats of ended programming === ==== Cartoon Network Studios ==== ==== Warner Bros. Animation ==== ==== Hanna-Barbera Cartoons ==== ==== Canadian co-productions ==== == Upcoming programming == === Original programming === ==== Cartoon Network Studios ==== ==== Warner Bros. Animation ==== ==== Preschool (Cartoonito) ==== === Acquired programming === ==== Preschool ==== == Former programming == An asterisk (*) indicates that the program initially aired as a Cartoon Network program. A double-asterisk (**) indicates that the program became a Boomerang program. A triple-asterisk (***) indicates that the program became an Adult Swim/Toonami program. === Original programming === ==== Cartoon Network Studios ==== ==== Warner Bros.
Animation ==== ==== Hanna-Barbera Cartoons ==== ==== Hanna-Barbera Studios Europe ==== ==== Williams Street Productions ==== ==== Live-action and live-action/animated series ==== ==== Preschool/Tickle-U/Cartoonito ==== ==== Anthology series ==== ==== Miniseries ==== ==== Short series ==== === Programming from Hanna-Barbera/Turner Entertainment Co. === === Programming from Warner Bros. Animation === === Programming from Adult Swim === === Acquired programming === ==== Canadian co-productions ==== ==== European co-productions ==== ==== Animated ==== ==== Anime ==== ==== Live-action and live-action/animated series ==== ==== Preschool (Cartoonito) ==== === Former specials === == Programming blocks == === Current programming blocks === === Former programming blocks === == Pilots == === Short format === This is a list of pilot episodes on Cartoon Network, along with their premiere dates for each. ==== Picked up ==== ==== Not picked up ==== === Long format === This is a list of pilot movies on Cartoon Network, along with their status and premiere dates for each. == See also == List of Cartoon Network films List of programs broadcast by Cartoonito List of programs broadcast by Adult Swim List of programs broadcast by Boomerang List of programs broadcast by Toonami List of programs broadcast by Discovery Family List of Cartoon Network Studios productions Hanna-Barbera Studios Europe filmography == Notes == == References ==
https://en.wikipedia.org/wiki/List_of_programs_broadcast_by_Cartoon_Network
In computer programming, orthogonality means that operations change just one thing without affecting others. The term is most frequently used regarding assembly instruction sets, as in orthogonal instruction set. Orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. It is associated with simplicity; the more orthogonal the design, the fewer exceptions. This makes it easier to learn, read and write programs in a programming language. The meaning of an orthogonal feature is independent of context; the key parameters are symmetry and consistency (for example, a pointer is an orthogonal concept). An example involving the IBM mainframe and the VAX highlights this concept. An IBM mainframe has two different instructions for adding the contents of a register to a memory cell (or another register). These statements are shown below: A Reg1, memory_cell AR Reg1, Reg2 In the first case, the contents of Reg1 are added to the contents of a memory cell; the result is stored in Reg1. In the second case, the contents of Reg1 are added to the contents of another register (Reg2) and the result is stored in Reg1. In contrast to the above set of statements, VAX has only one statement for addition: ADDL operand1, operand2 In this case the two operands (operand1 and operand2) can be registers, memory cells, or a combination of both; the instruction adds the contents of operand1 to the contents of operand2, storing the result in operand1. VAX's instruction for addition is more orthogonal than the instructions provided by IBM; hence, it is easier for the programmer to remember (and use) the one provided by VAX. The Revised Report on the Algorithmic Language Algol 68 had this to say about "Orthogonal design": The number of independent primitive concepts has been minimized in order that the language be easy to describe, to learn, and to implement.
On the other hand, these concepts have been applied "orthogonally" in order to maximize the expressive power of the language while trying to avoid deleterious superfluities. The design of the C language may be examined from the perspective of orthogonality. The C language is somewhat inconsistent in its treatment of concepts and language structure, making it difficult for the user to learn (and use) the language. Examples of exceptions follow: Structures (but not arrays) may be returned from a function. An array can be returned if it is inside a structure. A member of a structure can be any data type (except void, or the structure of the same type). An array element can be any data type (except void). Everything is passed by value (except arrays). Though this concept was first applied to programming languages, orthogonality has since become recognized as a valuable feature in the design of APIs and even user interfaces. There, too, having a small set of composable primitive operations without surprising cross-linkages is valuable, as it leads to systems that are easier to explain and less frustrating to use. On the other hand, orthogonality does not necessarily result in simpler or more successful systems; in the end, the less orthogonal CISC CPU architectures proved more commercially successful than the more orthogonal RISC architectures. == See also == Coupling (computer programming) Cohesion (computer science) == References == == Further reading == The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas. Addison-Wesley. 2000. ISBN 978-0-201-61622-4. A. van Wijngaarden, Orthogonal Design and Description of a Formal Language, Mathematisch Centrum, Amsterdam, MR 76, October 1965. == External links == "The Art of Unix Programming", chapter about Orthogonality – Orthogonality concept well-explained
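The same composability argument can be sketched at the API level with a short Python example (an illustration of ours, not from the article): the built-in sorted is orthogonal in that the choice of input iterable and the choice of key function are independent and combine freely.

```python
# sorted() composes orthogonally: any iterable, any key function,
# and the two choices do not interfere with each other.
data = [("ada", 1983), ("c", 1972), ("prolog", 1972)]

by_name = sorted(data)                        # default ordering on tuples
keys_only = sorted(dict(data))                # same primitive, different iterable
by_year = sorted(data, key=lambda kv: kv[1])  # same primitive, different key

print(by_name)
print(keys_only)
print(by_year)
```

No special-cased variant of sorted is needed for each input type or ordering; that is the absence of "exceptions" the article describes.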
https://en.wikipedia.org/wiki/Orthogonality_(programming)
A programming team is a team of people who develop or maintain computer software. They may be organised in numerous ways, but the egoless programming team and chief programmer team have been common structures. == Description == A programming team comprises people who develop or maintain computer software. == Programming team structures == Programming teams may be organised in numerous ways, but the egoless programming team and chief programmer team are two structures typically used. The main determinants when choosing the programming team structure typically include: difficulty, size, duration, modularity, reliability, time, and sociability. === Egoless programming === According to Marilyn Mantei, individuals who are part of a decentralized programming team report higher job satisfaction. An egoless programming team consists of groups of ten or fewer programmers. Code is exchanged and goals are set amongst the group members. Leadership is rotated within the group according to the needs and abilities required during a specific time. The lack of structure in the egoless team can result in a weakness of efficiency, effectiveness, and error detection for large-scale projects. Egoless programming teams work best for tasks that are very complex. === Chief programmer team === A chief programmer team usually consists of a three-person core made up of a chief programmer, a senior-level programmer, and a program librarian. Additional programmers and analysts are added to the team when necessary. The weaknesses of this structure include a lack of communication across team members, of task cooperation, and of complex task completion. The chief programmer team works best for tasks that are simpler and more straightforward, since the flow of information in the team is limited. Individuals who work in this team structure typically report lower work morale.
=== Shared workstation teams === ==== Pair programming ==== A development technique where two programmers work together at one workstation. ==== Mob programming ==== A software development approach where the whole team works on the same thing, at the same time, in the same space, and at the same computer. == Programming models == Programming models allow software development teams to develop, deploy, and test projects using different methodologies. In both of the programming models below, team members typically participate in daily 5–15 minute stand-ups. Traditionally, each member of the team will stand up and state what they have worked on since the previous stand-up, what they intend to work on until the next stand-up, and whether or not there is anything preventing them from making progress, often known as a "blocker". === Waterfall model === The waterfall model, noted as the more traditional approach, is a linear model of production. The sequence of events of this methodology is as follows: Gather and document requirements Design Code and unit test Perform system testing Perform user acceptance testing (UAT) Fix any issues Deliver the finished product Each stage is distinct during the software development process, and each stage generally finishes before the next one can begin. Programming teams using this model are able to design the project early on in the development process, allowing teams to focus on coding and testing during the bulk of the work instead of constantly reiterating the design. This also allows teams to design completely and more carefully, so that they have a full understanding of all software deliverables. === Agile model === The Agile development model is a more team-based approach to development than the previous waterfall model. Teams work in rapid delivery/deployment, which splits work into phases called "sprints". Sprints are usually defined as two weeks of planned software deliverables given to each team/team member.
After each sprint, work is reprioritized and the information learned from the previous sprint is used for future sprint planning. As the sprint work is complete, it can be reviewed and evaluated by the programming team and sent back for another iteration (i.e. next sprint) or closed if completed. The general principles of the Agile Manifesto are as follows: Satisfy the customer and continually develop software. Changing requirements are embraced for the customer's competitive advantage. Concentrate on delivering working software frequently. Delivery preference will be placed on the shortest possible time span. Developers and business people must work together throughout the entire project. Projects must be based on people who are motivated. Give them the proper environment and the support that they need. They should be trusted to get their jobs done. Face-to-face communication is the best way to transfer information to and from a team. Working software is the primary measurement of progress. Agile processes will promote development that is sustainable. Sponsors, developers, and users should be able to maintain an indefinite, constant pace. Constant attention to technical excellence and good design will enhance agility. Simplicity is considered to be the art of maximizing the work that is not done, and it is essential. Self-organized teams usually create the best designs. At regular intervals, the team will reflect on how to become more effective, and they will tune and adjust their behavior accordingly. == See also == Cross-functional team Scrum (software development) Software development process Team software process Project team == References ==
https://en.wikipedia.org/wiki/Programming_team
This article lists concurrent and parallel programming languages, categorizing them by a defining paradigm. Concurrent and parallel programming languages involve multiple timelines. Such languages provide synchronization constructs whose behavior is defined by a parallel execution model. A concurrent programming language is defined as one which uses the concept of simultaneously executing processes or threads of execution as a means of structuring a program. A parallel language is able to express programs that are executable on more than one processor. Both types are listed, as concurrency is a useful tool in expressing parallelism, but it is not necessary. In both cases, the features must be part of the language syntax and not an extension such as a library (libraries such as the posix-thread library implement a parallel execution model but lack the syntax and grammar required to be a programming language). The following categories aim to capture the main, defining feature of the languages contained, but they are not necessarily orthogonal. == Coordination languages == CnC (Concurrent Collections) Glenda Linda coordination language Millipede == Dataflow programming == CAL E (also object-oriented) Joule (also distributed) LabVIEW (also synchronous, also object-oriented) Lustre (also synchronous) Preesm (also synchronous) Signal (also synchronous) SISAL BMDFM == Distributed computing == Bloom Emerald Hermes Julia Limbo MPD Oz - Multi-paradigm language with particular support for constraint and distributed programming. 
Sequoia SR == Event-driven and hardware description == Esterel (also synchronous) SystemC SystemVerilog Verilog Verilog-AMS - math modeling of continuous time systems VHDL == Functional programming == Clojure Concurrent ML Elixir Elm Erlang Futhark Gleam Haskell Id MultiLisp SequenceL == Logic programming == Constraint Handling Rules Parlog Mercury == Monitor-based == Concurrent Pascal Concurrent Euclid Emerald == Multi-threaded == C= Cilk Cilk Plus Cind C# Clojure Concurrent Pascal Emerald Fork – programming language for the PRAM model. Go Java LabVIEW ParaSail Python Rust SequenceL == Object-oriented programming == Ada C* C# JavaScript TypeScript C++ AMP Charm++ Cind D Eiffel Simple Concurrent Object-Oriented Programming (SCOOP) Emerald Fortran – from ISO Fortran 2003 standard Java Join Java – has features from join-calculus LabVIEW ParaSail Python Ruby == Partitioned global address space (PGAS) == Chapel Coarray Fortran (included in standard/ISO Fortran since Fortran 2008, further extensions were added with the Fortran 2018 standard) Fortress High Performance Fortran Titanium Unified Parallel C X10 ZPL == Message passing == Ateji PX - An extension of Java with parallel primitives inspired from pi-calculus. Rust Smalltalk: p.17 Part IV, see table following fig. 11–29  === Actor model === Axum - a domain-specific language being developed by Microsoft. Dart - using Isolates Elixir (runs on BEAM, the Erlang virtual machine) Erlang Pony Janus Red SALSA Scala/Akka (toolkit) Smalltalk Akka.NET LabVIEW - LabVIEW Actor Framework === CSP-based === Alef Crystal Ease FortranM Go JCSP JoCaml Joyce Limbo (also distributed) Newsqueak Occam Occam-π – a derivative of Occam that integrates features from the pi-calculus PyCSP SuperPascal XC – a C-based language, integrating features from Occam, developed by XMOS == APIs/frameworks == These application programming interfaces support parallelism in host languages. 
Apache Beam Apache Flink Apache Hadoop Apache Spark CUDA OpenCL OpenHMPP OpenMP for C, C++, and Fortran (shared memory and attached GPUs) Message Passing Interface for C, C++, and Fortran (distributed computing) SYCL == See also == Concurrent computing List of concurrent programming languages Parallel programming model == References ==
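The distinction drawn in the introduction between language-level concurrency and library-level support can be illustrated with a short Python sketch (ours, not from the list): the standard queue module provides channel-style message passing between threads, but as an API rather than as language syntax.

```python
import queue
import threading

# Library-level message passing: queue.Queue acts as a channel between
# threads, but it is an API call, not a construct of the language grammar.
chan = queue.Queue()
received = []

def worker():
    while True:
        item = chan.get()
        if item is None:      # a sentinel value tells the worker to stop
            break
        received.append(item)

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    chan.put(i)               # send messages into the channel
chan.put(None)                # send the stop sentinel
t.join()
print(received)               # [0, 1, 2]
```

Compare this with a CSP-based language such as Go, where the channel and the send/receive operations are part of the syntax itself.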
https://en.wikipedia.org/wiki/List_of_concurrent_and_parallel_programming_languages
In computing, type introspection is the ability of a program to examine the type or properties of an object at runtime. Some programming languages possess this capability. Introspection should not be confused with reflection, which goes a step further and is the ability of a program to manipulate the metadata, properties, and functions of an object at runtime. Some programming languages also possess that capability (e.g., Java, Python, Julia, and Go). == Examples == === Objective-C === In Objective-C, for example, both the generic Object and NSObject (in Cocoa/OpenStep) provide the method isMemberOfClass:, which returns true if the argument to the method is an instance of the specified class. The method isKindOfClass: analogously returns true if the argument inherits from the specified class. For example, say we have an Apple and an Orange class inheriting from Fruit, and an eat method that branches on the receiver's class using these checks. When eat is called with a generic object (an id), the method will then behave correctly depending on the type of the generic object. === C++ === C++ supports type introspection via the run-time type information (RTTI) typeid and dynamic_cast keywords. The dynamic_cast expression can be used to determine whether a particular object is of a particular derived class. The typeid operator retrieves a std::type_info object describing the most derived type of an object. === Object Pascal === Type introspection has been a part of Object Pascal since the original release of Delphi, which uses RTTI heavily for visual form design. In Object Pascal, all classes descend from the base TObject class, which implements basic RTTI functionality. Every class's name can be referenced in code for RTTI purposes; the class name identifier is implemented as a pointer to the class's metadata, which can be declared and used as a variable of type TClass.
The language includes an is operator to determine whether an object is or descends from a given class, an as operator providing a type-checked typecast, and several TObject methods. Deeper introspection (enumerating fields and methods) is traditionally only supported for objects declared in the $M+ (a pragma) state, typically TPersistent, and only for symbols defined in the published section. Delphi 2010 increased this to nearly all symbols. === Java === The simplest example of type introspection in Java is the instanceof operator, which determines whether a particular object belongs to a particular class, to a subclass of that class, or to a class that implements a given interface. The java.lang.Class class is the basis of more advanced introspection. For instance, if it is desirable to determine the actual class of an object (rather than whether it is a member of a particular class), Object.getClass() and Class.getName() can be used. === PHP === In PHP, introspection can be done using the instanceof operator. === Perl === Introspection can be achieved using the ref and isa functions in Perl: ref returns the class of a blessed reference, and the isa method checks whether an instance's class is, or inherits from, a given class. ==== Meta-Object Protocol ==== Much more powerful introspection in Perl can be achieved using the Moose object system and the Class::MOP meta-object protocol; for example, you can check whether a given object does a role, or list the fully qualified names of all of the methods that can be invoked on the object, together with the classes in which they were defined. === Python === The most common method of introspection in Python is using the dir function to detail the attributes of an object. The built-in functions type and isinstance can be used to determine what an object is, while hasattr can determine what an object does. === Ruby === Type introspection is a core feature of Ruby.
In Ruby, the Object class (ancestor of every class) provides Object#instance_of? and Object#kind_of? methods for checking the instance's class. The latter returns true when the particular instance the message was sent to is an instance of a descendant of the class in question. For example, suppose two classes are created, A and B, the former being a superclass of the latter, and one instance of each is checked (this can be tried immediately in the Interactive Ruby Shell): an instance of B is not instance_of? A, but it is kind_of? A, because A is a superclass of the class of that instance. Note that the Class class is used as any other class in Ruby. Further, you can directly ask for the class of any object, and "compare" such classes with one another. === ActionScript === In ActionScript (AS3), the function flash.utils.getQualifiedClassName can be used to retrieve the class/type name of an arbitrary object. Alternatively, the is operator can be used to determine whether an object is of a specific type; it can be used to test class inheritance parents as well. ==== Meta-type introspection ==== Like Perl, ActionScript can go further than getting the class name and enumerate all the metadata, functions and other elements that make up an object, using the flash.utils.describeType function; this is used when implementing reflection in ActionScript. == See also == Reification (computer science) typeof == References == == External links == Introspection on Rosetta Code
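The introspection facilities surveyed above can be sketched concretely in Python (an illustration of ours; the Fruit and Apple classes are invented):

```python
class Fruit:
    def eat(self):
        return "eaten"

class Apple(Fruit):
    pass

a = Apple()
print(type(a).__name__)        # the actual class of the object: Apple
print(isinstance(a, Fruit))    # True: Apple inherits from Fruit
print(hasattr(a, "eat"))       # True: what the object can do
print("eat" in dir(a))         # True: dir lists the object's attributes
```

type answers "what is this object?", isinstance answers "is it a kind of X?" (like Ruby's kind_of? or Java's instanceof), and dir/hasattr enumerate what it offers.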
https://en.wikipedia.org/wiki/Type_introspection
Dangal TV is a Hindi general entertainment channel owned by the Enterr10 Television Network. It was launched in 2009 as a Bhojpuri-language movie channel but was later converted into a Hindi entertainment channel. Its programming consists of soap operas, fantasy shows, and romance shows. == History == Dangal TV was launched as a Bhojpuri movie channel in 2010 for India. In 2015, it was converted into a Hindi general entertainment channel, acquiring and airing many of the Hindi television shows of the now-defunct entertainment channel Imagine TV, along with shows from television channels such as DD National, Star Plus, Sony Entertainment Television, Zee TV, Colors TV and Sahara One. In 2017, Dangal TV made an entry into original content with Crime Alert (the first one-hour television show based on real-life crimes in India), Bahurani (its first original reality show) and Shivarjun: Ek Ichhadhari Ki Dastaan (its first original soap opera). After entering the original space, Dangal TV became the most watched Hindi-language entertainment channel, beating pay television channels. In 2021, Dangal TV made a deal with Viacom 18 to stream its shows on their OTT app Voot. In the same year, it stopped airing acquired shows and, from October, aired only original content shows. It was thus converted into an original entertainment channel, as its acquired shows were moved to the newly launched entertainment channel Dangal 2. == Reception == Dangal TV gained a favourable response from Hindi-speaking viewers by acquiring television shows from Hindi television channels, along with original shows like Mann Sundar, Mann Ati Sundar, Nath, Gehna, Crime Alert, Aye Mere Humsafar, Pyaar Ki Luka Chuppi, Ranju Ki Betiyaan, and Prem Bandhan.
== Programming == === Current broadcasts === === Former broadcasts === ==== Acquired series ==== === Original programming === ==== Anthology series ==== ==== Children/teen series ==== ==== Comedy series ==== ==== Drama series ==== ==== Mythological series ==== ==== Reality/non-scripted programming ==== == References == == External links == Official website
https://en.wikipedia.org/wiki/Dangal_(TV_channel)
Output budgeting is a wide-ranging management technique introduced in the United States in the mid-1960s by Robert S. McNamara's collaborator Charles J. Hitch, not always with the ready cooperation of administrators, and based on the industrial management techniques of program budgeting. The technique has subsequently been introduced in other countries, including Canada and the UK. The Planning, Programming, and Budgeting System (PPBS) is in effect an integration of a number of techniques in a planning and budgeting process for identifying, costing and assigning a complexity of resources, for establishing priorities and strategies in a major program, and for forecasting costs, expenditure and achievements within the immediate financial year or over a longer period. United States Department of Defense leaders use their Planning, Programming, and Budgeting System to link operational requirements with financial obligations. Department of Defense branches typically divide the process into plans, programs and budgets. While planning, programming, and budgeting continue throughout the year, PPBS dictates a sequential, annual process culminating in an annual Defense Plan, followed by a Defense Program, and then a Defense Budget. PPBS requires that planners focus on operational requirements, that programmers link the plans to a six-year financial plan (known as a Future Years Defense Plan (FYDP)), and that budgeters prepare a two-year Congressional budget. While each step contains more detailed financial data, the two-year Congressional budget stems from the six-year Future Years Defense Plan, which is based on the even longer-term Defense Plan. == Literature == Department of Education and Science, London, England. (1970). Output Budgeting for the Department of Education and Science, Pendragon House, Inc. 176 p. Christine E. Bonham, Elizabeth A. McClarin (2000), Accrual Budgeting: Experiences of Other Nations and Implications for the United States, DIANE, 217 p.
== References == Performance Budgeting: Linking Funding and Results, Marc Robinson (ed.), IMF, 2007 From Line-item to Program Budgeting, John Kim, Seoul, 2007 Program and Performance Budgeting Enthusiasm in India -- IMF Training Course, Holger van Eden, IMF, 2007
https://en.wikipedia.org/wiki/Output_budgeting
Originally introduced by Richard E. Bellman in (Bellman 1957), stochastic dynamic programming is a technique for modelling and solving problems of decision making under uncertainty. Closely related to stochastic programming and dynamic programming, stochastic dynamic programming represents the problem under scrutiny in the form of a Bellman equation. The aim is to compute a policy prescribing how to act optimally in the face of uncertainty. == A motivating example: Gambling game == A gambler has $2, she is allowed to play a game of chance 4 times, and her goal is to maximize her probability of ending up with at least $6. If the gambler bets $b on a play of the game, then with probability 0.4 she wins the game, recoups the initial bet, and increases her capital position by $b; with probability 0.6, she loses the bet amount $b; all plays are pairwise independent. On any play of the game, the gambler may not bet more money than she has available at the beginning of that play. Stochastic dynamic programming can be employed to model this problem and determine a betting strategy that, for instance, maximizes the gambler's probability of attaining a wealth of at least $6 by the end of the betting horizon. Note that if there were no limit to the number of games that could be played, the problem would become a variant of the well known St. Petersburg paradox.
== Formal background == Consider a discrete system defined on $n$ stages in which each stage $t = 1, \ldots, n$ is characterized by: an initial state $s_t \in S_t$, where $S_t$ is the set of feasible states at the beginning of stage $t$; a decision variable $x_t \in X_t$, where $X_t$ is the set of feasible actions at stage $t$ – note that $X_t$ may be a function of the initial state $s_t$; an immediate cost/reward function $p_t(s_t, x_t)$, representing the cost/reward at stage $t$ if $s_t$ is the initial state and $x_t$ the action selected; and a state transition function $g_t(s_t, x_t)$ that leads the system towards state $s_{t+1} = g_t(s_t, x_t)$. Let $f_t(s_t)$ represent the optimal cost/reward obtained by following an optimal policy over stages $t, t+1, \ldots, n$. Without loss of generality, in what follows we will consider a reward maximisation setting. In deterministic dynamic programming one usually deals with functional equations taking the following structure: $f_t(s_t) = \max_{x_t \in X_t} \{ p_t(s_t, x_t) + f_{t+1}(s_{t+1}) \}$, where $s_{t+1} = g_t(s_t, x_t)$ and the boundary condition of the system is $f_n(s_n) = \max_{x_n \in X_n} \{ p_n(s_n, x_n) \}$. The aim is to determine the set of optimal actions that maximise $f_1(s_1)$.
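The deterministic functional equation can be evaluated directly by recursing backward from the boundary condition. A minimal Python sketch (ours, on a made-up toy problem, not from the article): over $n = 3$ stages we hold a stock of $s$ units and may sell $x \in \{0, 1, 2\}$ units per stage, with immediate reward $p_t(s, x) = x$ and state transition $g_t(s, x) = s - x$.

```python
N = 3  # number of stages in the toy problem

def feasible_actions(s):
    # cannot sell more than the current stock, nor more than 2 per stage
    return range(min(s, 2) + 1)

def f(t, s):
    """Optimal total reward over stages t..N starting from state s."""
    if t > N:          # boundary: no reward after the final stage
        return 0
    # the deterministic functional equation:
    # f_t(s) = max over x of { p_t(s, x) + f_{t+1}(g_t(s, x)) }
    return max(x + f(t + 1, s - x) for x in feasible_actions(s))

print(f(1, 4))  # 4: at most min(4, 2*3) units can be sold in 3 stages
```

The recursion mirrors the equation term by term: the immediate reward x plus the value of the successor state s - x, maximised over feasible actions.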
Given the current state s t {\displaystyle s_{t}} and the current action x t {\displaystyle x_{t}} , we know with certainty the reward secured during the current stage and – thanks to the state transition function g t {\displaystyle g_{t}} – the future state towards which the system transitions. In practice, however, even if we know the state of the system at the beginning of the current stage as well as the decision taken, the state of the system at the beginning of the next stage and the current period reward are often random variables that can be observed only at the end of the current stage. Stochastic dynamic programming deals with problems in which the current period reward and/or the next period state are random, i.e. with multi-stage stochastic systems. The decision maker's goal is to maximise expected (discounted) reward over a given planning horizon. In their most general form, stochastic dynamic programs deal with functional equations taking the following structure f t ( s t ) = max x t ∈ X t ( s t ) { ( expected reward during stage t ∣ s t , x t ) + α ∑ s t + 1 Pr ( s t + 1 ∣ s t , x t ) f t + 1 ( s t + 1 ) } {\displaystyle f_{t}(s_{t})=\max _{x_{t}\in X_{t}(s_{t})}\left\{({\text{expected reward during stage }}t\mid s_{t},x_{t})+\alpha \sum _{s_{t+1}}\Pr(s_{t+1}\mid s_{t},x_{t})f_{t+1}(s_{t+1})\right\}} where f t ( s t ) {\displaystyle f_{t}(s_{t})} is the maximum expected reward that can be attained during stages t , t + 1 , … , n {\displaystyle t,t+1,\ldots ,n} , given state s t {\displaystyle s_{t}} at the beginning of stage t {\displaystyle t} ; x t {\displaystyle x_{t}} belongs to the set X t ( s t ) {\displaystyle X_{t}(s_{t})} of feasible actions at stage t {\displaystyle t} given initial state s t {\displaystyle s_{t}} ; α {\displaystyle \alpha } is the discount factor; Pr ( s t + 1 ∣ s t , x t ) {\displaystyle \Pr(s_{t+1}\mid s_{t},x_{t})} is the conditional probability that the state at the end of stage t {\displaystyle t} is s t + 1 
{\displaystyle s_{t+1}} given current state s t {\displaystyle s_{t}} and selected action x t {\displaystyle x_{t}} . Markov decision processes represent a special class of stochastic dynamic programs in which the underlying stochastic process is a stationary process that features the Markov property. === Gambling game as a stochastic dynamic program === Gambling game can be formulated as a Stochastic Dynamic Program as follows: there are n = 4 {\displaystyle n=4} games (i.e. stages) in the planning horizon the state s {\displaystyle s} in period t {\displaystyle t} represents the initial wealth at the beginning of period t {\displaystyle t} ; the action given state s {\displaystyle s} in period t {\displaystyle t} is the bet amount b {\displaystyle b} ; the transition probability p i , j a {\displaystyle p_{i,j}^{a}} from state i {\displaystyle i} to state j {\displaystyle j} when action a {\displaystyle a} is taken in state i {\displaystyle i} is easily derived from the probability of winning (0.4) or losing (0.6) a game. Let f t ( s ) {\displaystyle f_{t}(s)} be the probability that, by the end of game 4, the gambler has at least $6, given that she has $ s {\displaystyle s} at the beginning of game t {\displaystyle t} . the immediate profit incurred if action b {\displaystyle b} is taken in state s {\displaystyle s} is given by the expected value p t ( s , b ) = 0.4 f t + 1 ( s + b ) + 0.6 f t + 1 ( s − b ) {\displaystyle p_{t}(s,b)=0.4f_{t+1}(s+b)+0.6f_{t+1}(s-b)} . To derive the functional equation, define b t ( s ) {\displaystyle b_{t}(s)} as a bet that attains f t ( s ) {\displaystyle f_{t}(s)} , then at the beginning of game t = 4 {\displaystyle t=4} if s < 3 {\displaystyle s<3} it is impossible to attain the goal, i.e. f 4 ( s ) = 0 {\displaystyle f_{4}(s)=0} for s < 3 {\displaystyle s<3} ; if s ≥ 6 {\displaystyle s\geq 6} the goal is attained, i.e. 
f 4 ( s ) = 1 {\displaystyle f_{4}(s)=1} for s ≥ 6 {\displaystyle s\geq 6} ; if 3 ≤ s ≤ 5 {\displaystyle 3\leq s\leq 5} the gambler should bet enough to attain the goal, i.e. f 4 ( s ) = 0.4 {\displaystyle f_{4}(s)=0.4} for 3 ≤ s ≤ 5 {\displaystyle 3\leq s\leq 5} . For t < 4 {\displaystyle t<4} the functional equation is f t ( s ) = max b t ( s ) { 0.4 f t + 1 ( s + b ) + 0.6 f t + 1 ( s − b ) } {\displaystyle f_{t}(s)=\max _{b_{t}(s)}\{0.4f_{t+1}(s+b)+0.6f_{t+1}(s-b)\}} , where b t ( s ) {\displaystyle b_{t}(s)} ranges in 0 , . . . , s {\displaystyle 0,...,s} ; the aim is to find f 1 ( 2 ) {\displaystyle f_{1}(2)} . Given the functional equation, an optimal betting policy can be obtained via forward recursion or backward recursion algorithms, as outlined below. == Solution methods == Stochastic dynamic programs can be solved to optimality by using backward recursion or forward recursion algorithms. Memoization is typically employed to enhance performance. However, like deterministic dynamic programming, its stochastic variant also suffers from the curse of dimensionality. For this reason, approximate solution methods are typically employed in practical applications. === Backward recursion === Given a bounded state space, backward recursion (Bertsekas 2000) begins by tabulating f n ( k ) {\displaystyle f_{n}(k)} for every possible state k {\displaystyle k} belonging to the final stage n {\displaystyle n} . Once these values are tabulated, together with the associated optimal state-dependent actions x n ( k ) {\displaystyle x_{n}(k)} , it is possible to move to stage n − 1 {\displaystyle n-1} and tabulate f n − 1 ( k ) {\displaystyle f_{n-1}(k)} for all possible states belonging to stage n − 1 {\displaystyle n-1} . The process continues by considering in a backward fashion all remaining stages up to the first one. 
Once this tabulation process is complete, f 1 ( s ) {\displaystyle f_{1}(s)} – the value of an optimal policy given initial state s {\displaystyle s} – as well as the associated optimal action x 1 ( s ) {\displaystyle x_{1}(s)} can be easily retrieved from the table. Since the computation proceeds in a backward fashion, it is clear that backward recursion may lead to computation of a large number of states that are not necessary for the computation of f 1 ( s ) {\displaystyle f_{1}(s)} . ==== Example: Gambling game ==== === Forward recursion === Given the initial state s {\displaystyle s} of the system at the beginning of period 1, forward recursion (Bertsekas 2000) computes f 1 ( s ) {\displaystyle f_{1}(s)} by progressively expanding the functional equation (forward pass). This involves recursive calls for all f t + 1 ( ⋅ ) , f t + 2 ( ⋅ ) , … {\displaystyle f_{t+1}(\cdot ),f_{t+2}(\cdot ),\ldots } that are necessary for computing a given f t ( ⋅ ) {\displaystyle f_{t}(\cdot )} . The value of an optimal policy and its structure are then retrieved via a backward pass in which these suspended recursive calls are resolved. A key difference from backward recursion is the fact that f t {\displaystyle f_{t}} is computed only for states that are relevant for the computation of f 1 ( s ) {\displaystyle f_{1}(s)} . Memoization is employed to avoid recomputation of states that have already been considered. ==== Example: Gambling game ==== We shall illustrate forward recursion in the context of the Gambling game instance previously discussed. 
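The forward recursion with memoization just described can be sketched in Python (an illustrative sketch, not the article's original implementation; here the boundary check is placed at a terminal stage t = N + 1 rather than at stage 4, which yields the same values):

```python
from functools import lru_cache

P_WIN, P_LOSE = 0.4, 0.6
N = 4       # number of games
GOAL = 6    # target wealth in dollars

@lru_cache(maxsize=None)  # memoization: each state (t, s) is evaluated once
def f(t, s):
    """Probability of having at least GOAL by the end of game N,
    given wealth s at the beginning of game t."""
    if t > N:  # after the last game: has the goal been attained?
        return 1.0 if s >= GOAL else 0.0
    # Expand the functional equation over all feasible bets b in 0..s;
    # suspended recursive calls are resolved as the recursion unwinds.
    return max(P_WIN * f(t + 1, s + b) + P_LOSE * f(t + 1, s - b)
               for b in range(s + 1))

print(f(1, 2))  # ~ 0.1984
```

Thanks to `lru_cache`, states such as f(2, 2), which would otherwise be reached along several recursive paths, are computed only once.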
We begin the forward pass by considering f 1 ( 2 ) = min { b success probability in periods 1,2,3,4 0 0.4 f 2 ( 2 + 0 ) + 0.6 f 2 ( 2 − 0 ) 1 0.4 f 2 ( 2 + 1 ) + 0.6 f 2 ( 2 − 1 ) 2 0.4 f 2 ( 2 + 2 ) + 0.6 f 2 ( 2 − 2 ) {\displaystyle f_{1}(2)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 1,2,3,4}}\\\hline 0&0.4f_{2}(2+0)+0.6f_{2}(2-0)\\1&0.4f_{2}(2+1)+0.6f_{2}(2-1)\\2&0.4f_{2}(2+2)+0.6f_{2}(2-2)\\\end{array}}\right.} At this point we have not computed yet f 2 ( 4 ) , f 2 ( 3 ) , f 2 ( 2 ) , f 2 ( 1 ) , f 2 ( 0 ) {\displaystyle f_{2}(4),f_{2}(3),f_{2}(2),f_{2}(1),f_{2}(0)} , which are needed to compute f 1 ( 2 ) {\displaystyle f_{1}(2)} ; we proceed and compute these items. Note that f 2 ( 2 + 0 ) = f 2 ( 2 − 0 ) = f 2 ( 2 ) {\displaystyle f_{2}(2+0)=f_{2}(2-0)=f_{2}(2)} , therefore one can leverage memoization and perform the necessary computations only once. Computation of f 2 ( 4 ) , f 2 ( 3 ) , f 2 ( 2 ) , f 2 ( 1 ) , f 2 ( 0 ) {\displaystyle f_{2}(4),f_{2}(3),f_{2}(2),f_{2}(1),f_{2}(0)} f 2 ( 0 ) = min { b success probability in periods 2,3,4 0 0.4 f 3 ( 0 + 0 ) + 0.6 f 3 ( 0 − 0 ) {\displaystyle f_{2}(0)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(0+0)+0.6f_{3}(0-0)\\\end{array}}\right.} f 2 ( 1 ) = min { b success probability in periods 2,3,4 0 0.4 f 3 ( 1 + 0 ) + 0.6 f 3 ( 1 − 0 ) 1 0.4 f 3 ( 1 + 1 ) + 0.6 f 3 ( 1 − 1 ) {\displaystyle f_{2}(1)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(1+0)+0.6f_{3}(1-0)\\1&0.4f_{3}(1+1)+0.6f_{3}(1-1)\\\end{array}}\right.} f 2 ( 2 ) = min { b success probability in periods 2,3,4 0 0.4 f 3 ( 2 + 0 ) + 0.6 f 3 ( 2 − 0 ) 1 0.4 f 3 ( 2 + 1 ) + 0.6 f 3 ( 2 − 1 ) 2 0.4 f 3 ( 2 + 2 ) + 0.6 f 3 ( 2 − 2 ) {\displaystyle f_{2}(2)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 
0&0.4f_{3}(2+0)+0.6f_{3}(2-0)\\1&0.4f_{3}(2+1)+0.6f_{3}(2-1)\\2&0.4f_{3}(2+2)+0.6f_{3}(2-2)\\\end{array}}\right.} f 2 ( 3 ) = min { b success probability in periods 2,3,4 0 0.4 f 3 ( 3 + 0 ) + 0.6 f 3 ( 3 − 0 ) 1 0.4 f 3 ( 3 + 1 ) + 0.6 f 3 ( 3 − 1 ) 2 0.4 f 3 ( 3 + 2 ) + 0.6 f 3 ( 3 − 2 ) 3 0.4 f 3 ( 3 + 3 ) + 0.6 f 3 ( 3 − 3 ) {\displaystyle f_{2}(3)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(3+0)+0.6f_{3}(3-0)\\1&0.4f_{3}(3+1)+0.6f_{3}(3-1)\\2&0.4f_{3}(3+2)+0.6f_{3}(3-2)\\3&0.4f_{3}(3+3)+0.6f_{3}(3-3)\\\end{array}}\right.} f 2 ( 4 ) = min { b success probability in periods 2,3,4 0 0.4 f 3 ( 4 + 0 ) + 0.6 f 3 ( 4 − 0 ) 1 0.4 f 3 ( 4 + 1 ) + 0.6 f 3 ( 4 − 1 ) 2 0.4 f 3 ( 4 + 2 ) + 0.6 f 3 ( 4 − 2 ) {\displaystyle f_{2}(4)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(4+0)+0.6f_{3}(4-0)\\1&0.4f_{3}(4+1)+0.6f_{3}(4-1)\\2&0.4f_{3}(4+2)+0.6f_{3}(4-2)\end{array}}\right.} We have now computed f 2 ( k ) {\displaystyle f_{2}(k)} for all k {\displaystyle k} that are needed to compute f 1 ( 2 ) {\displaystyle f_{1}(2)} . However, this has led to additional suspended recursions involving f 3 ( 4 ) , f 3 ( 3 ) , f 3 ( 2 ) , f 3 ( 1 ) , f 3 ( 0 ) {\displaystyle f_{3}(4),f_{3}(3),f_{3}(2),f_{3}(1),f_{3}(0)} . We proceed and compute these values. 
Computation of f 3 ( 4 ) , f 3 ( 3 ) , f 3 ( 2 ) , f 3 ( 1 ) , f 3 ( 0 ) {\displaystyle f_{3}(4),f_{3}(3),f_{3}(2),f_{3}(1),f_{3}(0)} f 3 ( 0 ) = min { b success probability in periods 3,4 0 0.4 f 4 ( 0 + 0 ) + 0.6 f 4 ( 0 − 0 ) {\displaystyle f_{3}(0)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(0+0)+0.6f_{4}(0-0)\\\end{array}}\right.} f 3 ( 1 ) = min { b success probability in periods 3,4 0 0.4 f 4 ( 1 + 0 ) + 0.6 f 4 ( 1 − 0 ) 1 0.4 f 4 ( 1 + 1 ) + 0.6 f 4 ( 1 − 1 ) {\displaystyle f_{3}(1)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(1+0)+0.6f_{4}(1-0)\\1&0.4f_{4}(1+1)+0.6f_{4}(1-1)\\\end{array}}\right.} f 3 ( 2 ) = min { b success probability in periods 3,4 0 0.4 f 4 ( 2 + 0 ) + 0.6 f 4 ( 2 − 0 ) 1 0.4 f 4 ( 2 + 1 ) + 0.6 f 4 ( 2 − 1 ) 2 0.4 f 4 ( 2 + 2 ) + 0.6 f 4 ( 2 − 2 ) {\displaystyle f_{3}(2)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(2+0)+0.6f_{4}(2-0)\\1&0.4f_{4}(2+1)+0.6f_{4}(2-1)\\2&0.4f_{4}(2+2)+0.6f_{4}(2-2)\\\end{array}}\right.} f 3 ( 3 ) = min { b success probability in periods 3,4 0 0.4 f 4 ( 3 + 0 ) + 0.6 f 4 ( 3 − 0 ) 1 0.4 f 4 ( 3 + 1 ) + 0.6 f 4 ( 3 − 1 ) 2 0.4 f 4 ( 3 + 2 ) + 0.6 f 4 ( 3 − 2 ) 3 0.4 f 4 ( 3 + 3 ) + 0.6 f 4 ( 3 − 3 ) {\displaystyle f_{3}(3)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(3+0)+0.6f_{4}(3-0)\\1&0.4f_{4}(3+1)+0.6f_{4}(3-1)\\2&0.4f_{4}(3+2)+0.6f_{4}(3-2)\\3&0.4f_{4}(3+3)+0.6f_{4}(3-3)\\\end{array}}\right.} f 3 ( 4 ) = min { b success probability in periods 3,4 0 0.4 f 4 ( 4 + 0 ) + 0.6 f 4 ( 4 − 0 ) 1 0.4 f 4 ( 4 + 1 ) + 0.6 f 4 ( 4 − 1 ) 2 0.4 f 4 ( 4 + 2 ) + 0.6 f 4 ( 4 − 2 ) {\displaystyle f_{3}(4)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(4+0)+0.6f_{4}(4-0)\\1&0.4f_{4}(4+1)+0.6f_{4}(4-1)\\2&0.4f_{4}(4+2)+0.6f_{4}(4-2)\end{array}}\right.} f 3 ( 5 ) = min { b 
success probability in periods 3,4 0 0.4 f 4 ( 5 + 0 ) + 0.6 f 4 ( 5 − 0 ) 1 0.4 f 4 ( 5 + 1 ) + 0.6 f 4 ( 5 − 1 ) {\displaystyle f_{3}(5)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(5+0)+0.6f_{4}(5-0)\\1&0.4f_{4}(5+1)+0.6f_{4}(5-1)\end{array}}\right.} Since stage 4 is the last stage in our system, f 4 ( ⋅ ) {\displaystyle f_{4}(\cdot )} represent boundary conditions that are easily computed as follows. Boundary conditions f 4 ( 0 ) = 0 b 4 ( 0 ) = 0 f 4 ( 1 ) = 0 b 4 ( 1 ) = { 0 , 1 } f 4 ( 2 ) = 0 b 4 ( 2 ) = { 0 , 1 , 2 } f 4 ( 3 ) = 0.4 b 4 ( 3 ) = { 3 } f 4 ( 4 ) = 0.4 b 4 ( 4 ) = { 2 , 3 , 4 } f 4 ( 5 ) = 0.4 b 4 ( 5 ) = { 1 , 2 , 3 , 4 , 5 } f 4 ( d ) = 1 b 4 ( d ) = { 0 , … , d − 6 } for d ≥ 6 {\displaystyle {\begin{array}{ll}f_{4}(0)=0&b_{4}(0)=0\\f_{4}(1)=0&b_{4}(1)=\{0,1\}\\f_{4}(2)=0&b_{4}(2)=\{0,1,2\}\\f_{4}(3)=0.4&b_{4}(3)=\{3\}\\f_{4}(4)=0.4&b_{4}(4)=\{2,3,4\}\\f_{4}(5)=0.4&b_{4}(5)=\{1,2,3,4,5\}\\f_{4}(d)=1&b_{4}(d)=\{0,\ldots ,d-6\}{\text{ for }}d\geq 6\end{array}}} At this point it is possible to proceed and recover the optimal policy and its value via a backward pass involving, at first, stage 3 Backward pass involving f 3 ( ⋅ ) {\displaystyle f_{3}(\cdot )} f 3 ( 0 ) = min { b success probability in periods 3,4 0 0.4 ( 0 ) + 0.6 ( 0 ) = 0 {\displaystyle f_{3}(0)=\min \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4(0)+0.6(0)=0\\\end{array}}\right.} f 3 ( 1 ) = min { b success probability in periods 3,4 max 0 0.4 ( 0 ) + 0.6 ( 0 ) = 0 ← b 3 ( 1 ) = 0 1 0.4 ( 0 ) + 0.6 ( 0 ) = 0 ← b 3 ( 1 ) = 1 {\displaystyle f_{3}(1)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0&\leftarrow b_{3}(1)=0\\1&0.4(0)+0.6(0)=0&\leftarrow b_{3}(1)=1\\\end{array}}\right.} f 3 ( 2 ) = min { b success probability in periods 3,4 max 0 0.4 ( 0 ) + 0.6 ( 0 ) = 0 1 0.4 ( 0.4 ) + 0.6 ( 0 ) = 0.16 ← b 3 ( 2 ) = 1 2 0.4 ( 0.4 ) 
+ 0.6 ( 0 ) = 0.16 ← b 3 ( 2 ) = 2 {\displaystyle f_{3}(2)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0\\1&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{3}(2)=1\\2&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{3}(2)=2\\\end{array}}\right.} f 3 ( 3 ) = min { b success probability in periods 3,4 max 0 0.4 ( 0.4 ) + 0.6 ( 0.4 ) = 0.4 ← b 3 ( 3 ) = 0 1 0.4 ( 0.4 ) + 0.6 ( 0 ) = 0.16 2 0.4 ( 0.4 ) + 0.6 ( 0 ) = 0.16 3 0.4 ( 1 ) + 0.6 ( 0 ) = 0.4 ← b 3 ( 3 ) = 3 {\displaystyle f_{3}(3)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(3)=0\\1&0.4(0.4)+0.6(0)=0.16\\2&0.4(0.4)+0.6(0)=0.16\\3&0.4(1)+0.6(0)=0.4&\leftarrow b_{3}(3)=3\\\end{array}}\right.} f 3 ( 4 ) = min { b success probability in periods 3,4 max 0 0.4 ( 0.4 ) + 0.6 ( 0.4 ) = 0.4 ← b 3 ( 4 ) = 0 1 0.4 ( 0.4 ) + 0.6 ( 0.4 ) = 0.4 ← b 3 ( 4 ) = 1 2 0.4 ( 1 ) + 0.6 ( 0 ) = 0.4 ← b 3 ( 4 ) = 2 {\displaystyle f_{3}(4)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(4)=0\\1&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(4)=1\\2&0.4(1)+0.6(0)=0.4&\leftarrow b_{3}(4)=2\\\end{array}}\right.} f 3 ( 5 ) = min { b success probability in periods 3,4 max 0 0.4 ( 0.4 ) + 0.6 ( 0.4 ) = 0.4 1 0.4 ( 1 ) + 0.6 ( 0.4 ) = 0.64 ← b 3 ( 5 ) = 1 {\displaystyle f_{3}(5)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4\\1&0.4(1)+0.6(0.4)=0.64&\leftarrow b_{3}(5)=1\\\end{array}}\right.} and, then, stage 2. 
Backward pass involving f 2 ( ⋅ ) {\displaystyle f_{2}(\cdot )} f 2 ( 0 ) = min { b success probability in periods 2,3,4 max 0 0.4 ( 0 ) + 0.6 ( 0 ) = 0 ← b 2 ( 0 ) = 0 {\displaystyle f_{2}(0)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0&\leftarrow b_{2}(0)=0\\\end{array}}\right.} f 2 ( 1 ) = min { b success probability in periods 2,3,4 max 0 0.4 ( 0 ) + 0.6 ( 0 ) = 0 1 0.4 ( 0.16 ) + 0.6 ( 0 ) = 0.064 ← b 2 ( 1 ) = 1 {\displaystyle f_{2}(1)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0\\1&0.4(0.16)+0.6(0)=0.064&\leftarrow b_{2}(1)=1\\\end{array}}\right.} f 2 ( 2 ) = min { b success probability in periods 2,3,4 max 0 0.4 ( 0.16 ) + 0.6 ( 0.16 ) = 0.16 ← b 2 ( 2 ) = 0 1 0.4 ( 0.4 ) + 0.6 ( 0 ) = 0.16 ← b 2 ( 2 ) = 1 2 0.4 ( 0.4 ) + 0.6 ( 0 ) = 0.16 ← b 2 ( 2 ) = 2 {\displaystyle f_{2}(2)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.16)+0.6(0.16)=0.16&\leftarrow b_{2}(2)=0\\1&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{2}(2)=1\\2&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{2}(2)=2\\\end{array}}\right.} f 2 ( 3 ) = min { b success probability in periods 2,3,4 max 0 0.4 ( 0.4 ) + 0.6 ( 0.4 ) = 0.4 ← b 2 ( 3 ) = 0 1 0.4 ( 0.4 ) + 0.6 ( 0.16 ) = 0.256 2 0.4 ( 0.64 ) + 0.6 ( 0 ) = 0.256 3 0.4 ( 1 ) + 0.6 ( 0 ) = 0.4 ← b 2 ( 3 ) = 3 {\displaystyle f_{2}(3)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{2}(3)=0\\1&0.4(0.4)+0.6(0.16)=0.256\\2&0.4(0.64)+0.6(0)=0.256\\3&0.4(1)+0.6(0)=0.4&\leftarrow b_{2}(3)=3\\\end{array}}\right.} f 2 ( 4 ) = min { b success probability in periods 2,3,4 max 0 0.4 ( 0.4 ) + 0.6 ( 0.4 ) = 0.4 1 0.4 ( 0.64 ) + 0.6 ( 0.4 ) = 0.496 ← b 2 ( 4 ) = 1 2 0.4 ( 1 ) + 0.6 ( 0.16 ) = 0.496 ← b 2 ( 4 ) = 2 {\displaystyle f_{2}(4)=\min \left\{{\begin{array}{rrr}b&{\text{success 
probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4\\1&0.4(0.64)+0.6(0.4)=0.496&\leftarrow b_{2}(4)=1\\2&0.4(1)+0.6(0.16)=0.496&\leftarrow b_{2}(4)=2\\\end{array}}\right.} We finally recover the value f 1 ( 2 ) {\displaystyle f_{1}(2)} of an optimal policy f 1 ( 2 ) = min { b success probability in periods 1,2,3,4 max 0 0.4 ( 0.16 ) + 0.6 ( 0.16 ) = 0.16 1 0.4 ( 0.4 ) + 0.6 ( 0.064 ) = 0.1984 ← b 1 ( 2 ) = 1 2 0.4 ( 0.496 ) + 0.6 ( 0 ) = 0.1984 ← b 1 ( 2 ) = 2 {\displaystyle f_{1}(2)=\min \left\{{\begin{array}{rrr}b&{\text{success probability in periods 1,2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.16)+0.6(0.16)=0.16\\1&0.4(0.4)+0.6(0.064)=0.1984&\leftarrow b_{1}(2)=1\\2&0.4(0.496)+0.6(0)=0.1984&\leftarrow b_{1}(2)=2\\\end{array}}\right.} This is the optimal policy that has been previously illustrated. Note that there are multiple optimal policies leading to the same optimal value f 1 ( 2 ) = 0.1984 {\displaystyle f_{1}(2)=0.1984} ; for instance, in the first game one may either bet $1 or $2. A complete Python implementation of this example is available, as is GamblersRuin.java, a standalone Java 8 implementation. === Approximate dynamic programming === An introduction to approximate dynamic programming is provided by Powell (2009). == Sources == Bellman, R. (1957), Dynamic Programming, Princeton University Press, ISBN 978-0-486-42809-3. Dover paperback edition (2003) Bertsekas, D. P. (2000), Dynamic Programming and Optimal Control (2nd ed.), Athena Scientific, ISBN 978-1-886529-09-0. In two volumes. Powell, W. B. (2009), "What you should know about approximate dynamic programming", Naval Research Logistics, 56 (1): 239–249, CiteSeerX 10.1.1.150.1854, doi:10.1002/nav.20347, S2CID 7134937 == Further reading == Ross, S. M.; Birnbaum, Z. W.; Lukacs, E. 
(1983), Introduction to Stochastic Dynamic Programming, Elsevier, ISBN 978-0-12-598420-1. == See also == == References ==
https://en.wikipedia.org/wiki/Stochastic_dynamic_programming
In computer software, a general-purpose programming language (GPL) is a programming language for building software in a wide variety of application domains. Conversely, a domain-specific programming language (DSL) is used within a specific area. For example, Python is a GPL, while SQL is a DSL for querying relational databases. == History == Early programming languages were designed for scientific computing (numerical calculations) or commercial data processing, as was computer hardware. Scientific languages such as Fortran and Algol supported floating-point calculations and multidimensional arrays, while business languages such as COBOL supported fixed-field file formats and data records. Much less widely used were specialized languages such as IPL-V and LISP for symbolic list processing; COMIT for string manipulation; APT for numerically controlled machines. Systems programming requiring pointer manipulation was typically done in assembly language, though JOVIAL was used for some military applications. IBM's System/360, announced in 1964, was designed as a unified hardware architecture supporting both scientific and commercial applications, and IBM developed PL/I for it as a single, general-purpose language that supported scientific, commercial, and systems programming. Indeed, a subset of PL/I was used as the standard systems programming language for the Multics operating system. Since PL/I, the distinction between scientific and commercial programming languages has diminished, with most languages supporting the basic features required by both, and much of the special file format handling delegated to specialized database management systems. Many specialized languages were also developed starting in the 1960s: GPSS and Simula for discrete event simulation; MAD, BASIC, Logo, and Pascal for teaching programming; C for systems programming; JOSS and APL\360 for interactive programming. == GPL vs. 
DSL == The distinction between general-purpose programming languages and domain-specific programming languages is not always clear. A programming language may be created for a specific task, but used beyond that original domain and thus come to be considered a general-purpose programming language. For example, COBOL, Fortran, and Lisp were created as DSLs (for business processing, numeric computation, and symbolic processing), but became GPLs over time. Conversely, a language may be designed for general use but only applied in a specific area in practice. A programming language that is well suited for a problem, whether it be a general-purpose language or a DSL, should minimize the level of detail required while still being expressive enough in the problem domain. As the name suggests, a general-purpose language is "general" in that it cannot provide support for domain-specific notation, whereas DSLs can be designed in diverse problem domains to do exactly that. General-purpose languages are preferred to DSLs when an application domain is not well understood enough to warrant its own language. In this case, a general-purpose language with an appropriate library of data types and functions for the domain may be used instead. While DSLs are usually smaller than GPLs in that they offer a smaller range of notations and abstractions, some DSLs actually contain an entire GPL as a sublanguage. In these instances, the DSLs are able to offer domain-specific expressive power along with the expressive power of a GPL. General-purpose programming languages are all Turing complete, meaning that they can theoretically solve any computational problem. Domain-specific languages are often similarly Turing complete, but not exclusively so. === Advantages and disadvantages === General-purpose programming languages are more commonly used by programmers. According to a study, C, Python, and Java were the most commonly used programming languages in 2021. 
One argument in favor of using general-purpose programming languages over domain-specific languages is that more people will be familiar with these languages, removing the need to learn a new language. Additionally, for many tasks (e.g., statistical analysis, machine learning, etc.) there are libraries that are extensively tested and optimized. Theoretically, the presence of these libraries should bridge the gap between general-purpose and domain-specific languages. An empirical study in 2010 sought to measure problem-solving and productivity between GPLs and DSLs by giving problems to users who were familiar with the GPL (C#) and unfamiliar with the DSL (XAML). Ultimately, users of this specific domain-specific language performed better by 15%, even though they were more familiar with the GPL, warranting further research. == Examples == === C === The predecessor to C, B, was developed largely for a specific purpose: systems programming. By contrast, C has found use in a variety of computational domains, such as operating systems, device drivers, application software, and embedded systems. C is suitable for use in a variety of areas because of its generality. It provides economy of expression, flow control, data structures, and a rich set of operators, but does not constrain its users to use it in any one context. As a result, though it was first used by its creators to rewrite the kernel of the Unix operating system, it was easily adapted for use in application development, embedded systems (e.g., microprocessor programming), video games (e.g., Doom), and so on. Today, C remains one of the most popular and widely used programming languages. 
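To illustrate the earlier point that a general-purpose language with well-tested libraries can cover a domain a DSL serves, here is a minimal sketch in which Python's standard sqlite3 module embeds the SQL DSL inside a general-purpose host (the table and column names are made up for the example):

```python
import sqlite3

# The host GPL (Python) drives control flow; the embedded DSL (SQL)
# expresses the domain-specific part: declarative querying.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, points INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("ada", 10), ("alan", 7), ("grace", 12)])

# Domain-specific notation (SQL) performs the aggregation ...
(total,) = conn.execute("SELECT SUM(points) FROM scores").fetchone()

# ... while general-purpose code consumes the result.
print(total)  # 29
conn.close()
```

Here neither language alone would be as convenient: SQL has no notion of the surrounding program, and expressing the aggregation in bare Python would forgo the DSL's concise notation.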
While C++'s core area of application is in systems programming (because of C++'s ability to grant access to low-level architecture), it has been used extensively to build desktop applications, video games, databases, financial systems, and much more. Major software and finance companies, such as Microsoft, Apple, Bloomberg, and Morgan Stanley, still widely use C++ in their internal and external applications. === Python === Python was conceived as a language that emphasized code readability and extensibility. The former allowed non-software engineers to easily learn and write computer programs, while the latter allowed domain specialists to easily create libraries suited to their own use cases. For these reasons, Python has been used across a wide range of domains. Some areas where Python is used include: Web development – Frameworks like Django and Flask have allowed web developers to create robust web servers that can also exploit the wider Python ecosystem. Science and academia – Scientific and data libraries, like SciPy and Pandas, have enabled Python's use in scientific research. Machine learning – Libraries like scikit-learn and TensorFlow have made machine learning more accessible to developers. General software development – Developing user applications, web scraping programs, games, and other general software. == List == The following are some general-purpose programming languages: == See also == General-purpose markup language General-purpose modeling language == References ==
https://en.wikipedia.org/wiki/General-purpose_programming_language
Mojo is a programming language in the Python family that is currently under development. It is available both in browsers via Jupyter notebooks, and locally on Linux and macOS. Mojo aims to combine the usability of a high-level programming language, specifically Python, with the performance of systems programming languages such as C++, Rust, and Zig. As of February 2025, the Mojo compiler is closed source with an open source standard library. Modular, the company behind Mojo, has stated an intent to eventually open source the Mojo language, as it matures. Mojo builds on the Multi-Level Intermediate Representation (MLIR) compiler software framework, instead of directly on the lower level LLVM compiler framework like many languages such as Julia, Swift, Clang, and Rust. MLIR is a newer compiler framework that allows Mojo to exploit higher level compiler passes unavailable in LLVM alone, and allows Mojo to compile down and target more than only central processing units (CPUs), including producing code that can run on graphics processing units (GPUs), Tensor Processing Units (TPUs), application-specific integrated circuits (ASICs) and other accelerators. It can also often make direct use of certain CPU optimizations, such as single instruction, multiple data (SIMD), with less intervention by the developer than in many other languages. According to Jeremy Howard of fast.ai, Mojo can be seen as "syntax sugar for MLIR" and for that reason Mojo is well optimized for applications like artificial intelligence (AI). == Origin and development history == The Mojo programming language was created by Modular Inc, which was founded by Chris Lattner, the original architect of the Swift programming language and LLVM, and Tim Davis, a former Google employee. The intention behind Mojo is to bridge the gap between Python's ease of use and the fast performance required for cutting-edge AI applications. According to public change logs, Mojo development goes back to 2022. 
In May of 2023, the first publicly testable version was made available online via a hosted playground. By September 2023 Mojo was available for local download for Linux and by October 2023 it was also made available for download on Apple's macOS. In March of 2024, Modular open sourced the Mojo standard library and started accepting community contributions under the Apache 2.0 license. == Features == Mojo was created for an easy transition from Python. The language has syntax similar to Python's, with inferred static typing, and allows users to import Python modules. It uses LLVM and MLIR as its compilation backend. The language also intends to add a foreign function interface to call C/C++ and Python code. The language is not source-compatible with Python 3, only providing a subset of its syntax, e.g. missing the global keyword, list and dictionary comprehensions, and support for classes. Further, Mojo also adds features that enable performant low-level programming: fn for creating typed, compiled functions and "struct" for memory-optimized alternatives to classes. Mojo structs support methods, fields, operator overloading, and decorators. The language also provides a borrow checker, an influence from Rust. Mojo def functions use value semantics by default (functions receive a copy of all arguments and any modifications are not visible outside the function), while Python functions use reference semantics (functions receive a reference on their arguments and any modification of a mutable argument inside the function is visible outside). The language is not open source, but it is planned to be made open source in the future. == Programming examples == In Mojo, functions can be declared using both fn (for performant functions) or def (for Python compatibility). 
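The value-versus-reference contrast described above can be demonstrated from the Python side: a plain Python function receives a reference to a mutable argument, so in-place modification is visible to the caller, whereas value semantics must be emulated by copying (an illustrative sketch; names are made up):

```python
def append_item(items):
    # Python passes a reference to the list, so this mutation
    # is visible to the caller (reference semantics).
    items.append(99)

def append_to_copy(items):
    # Emulating value semantics by hand: work on a copy,
    # leaving the caller's list untouched.
    items = list(items)
    items.append(99)
    return items

data = [1, 2, 3]
append_item(data)
print(data)      # [1, 2, 3, 99] -- the caller sees the mutation

data2 = [1, 2, 3]
result = append_to_copy(data2)
print(data2)     # [1, 2, 3]     -- the original is unchanged
print(result)    # [1, 2, 3, 99]
```

A Mojo def function behaves by default like the second variant: modifications to arguments inside the function are not visible outside it.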
Basic arithmetic operations can be expressed in Mojo with either a def function or an fn function. The manner in which Mojo employs var and let for mutable and immutable variable declarations respectively mirrors the syntax found in Swift: var is used for mutable variables, while let is designated for constants or immutable variables. == Usage == The Mojo SDK allows Mojo programmers to compile and execute Mojo source files locally from a command-line interface and currently supports Ubuntu and macOS. Additionally, there is a Mojo extension for Visual Studio Code which provides code completion and tooltips. In January 2024, an inference model of LLaMA2 written in Mojo was released to the public. == See also == List of programming languages for artificial intelligence == References == == External links == Official website Mojo manual mojo on GitHub All about mojo programming language Mojo may be the biggest programming language advance in decades Mojo: The Future of AI Programming
https://en.wikipedia.org/wiki/Mojo_(programming_language)
In functional programming, monads are a way to structure computations as a sequence of steps, where each step not only produces a value but also some extra information about the computation, such as a potential failure, non-determinism, or side effect. More formally, a monad is a type constructor M equipped with two operations, return : <A>(a : A) -> M(A) which lifts a value into the monadic context, and bind : <A,B>(m_a : M(A), f : A -> M(B)) -> M(B) which chains monadic computations. In simpler terms, monads can be thought of as interfaces implemented on type constructors, that allow for functions to abstract over various type constructor variants that implement monad (e.g. Option, List, etc.). Both the concept of a monad and the term originally come from category theory, where a monad is defined as an endofunctor with additional structure. Research beginning in the late 1980s and early 1990s established that monads could bring seemingly disparate computer-science problems under a unified, functional model. Category theory also provides a few formal requirements, known as the monad laws, which should be satisfied by any monad and can be used to verify monadic code. Since monads make semantics explicit for a kind of computation, they can also be used to implement convenient language features. Some languages, such as Haskell, even offer pre-built definitions in their core libraries for the general monad structure and common instances. == Overview == "For a monad m, a value of type m a represents having access to a value of type a within the context of the monad." —C. A. McCann More exactly, a monad can be used where unrestricted access to a value is inappropriate for reasons specific to the scenario. In the case of the Maybe monad, it is because the value may not exist. In the case of the IO monad, it is because the value may not be known yet, such as when the monad represents user input that will only be provided after a prompt is displayed. 
In all cases the scenarios in which access makes sense are captured by the bind operation defined for the monad; for the Maybe monad a value is bound only if it exists, and for the IO monad a value is bound only after the previous operations in the sequence have been performed. A monad can be created by defining a type constructor M and two operations: return :: a -> M a (often also called unit), which receives a value of type a and wraps it into a monadic value of type M a, and bind :: (M a) -> (a -> M b) -> (M b) (typically represented as >>=), which receives a monadic value of type M a and a function f that accepts values of the base type a. Bind unwraps a, applies f to it, and can process the result of f as a monadic value M b. (An alternative but equivalent construct using the join function instead of the bind operator can be found in the later section § Derivation from functors.) With these elements, the programmer composes a sequence of function calls (a "pipeline") with several bind operators chained together in an expression. Each function call transforms its input plain-type value, and the bind operator handles the returned monadic value, which is fed into the next step in the sequence. Typically, the bind operator >>= may contain code unique to the monad that performs additional computation steps not available in the function received as a parameter. Between each pair of composed function calls, the bind operator can inject into the monadic value m a some additional information that is not accessible within the function f, and pass it along down the pipeline. It can also exert finer control of the flow of execution, for example by calling the function only under some conditions, or executing the function calls in a particular order. === An example: Maybe === One example of a monad is the Maybe type. 
Undefined null results are a particular pain point that many procedural languages provide no specific tools for dealing with, requiring use of the null object pattern or checks for invalid values at each operation. This causes bugs and makes it harder to build robust software that gracefully handles errors. The Maybe type forces the programmer to deal with these potentially undefined results by explicitly defining the two states of a result: Just result, or Nothing. For example, the programmer might be constructing a parser, which is to return an intermediate result, or else signal a condition which the parser has detected, and which the programmer must also handle. With just a little extra functional spice on top, this Maybe type transforms into a fully featured monad.: 12.3 pages 148–151  In most languages, the Maybe monad is also known as an option type, which is just a type that marks whether or not it contains a value. Typically it is expressed as some kind of enumerated type. In the Rust programming language it is called Option<T>, and variants of this type can either be a value of generic type T or the empty variant None. Option<T> can also be understood as a "wrapping" type, and this is where its connection to monads comes in. In languages with some form of the Maybe type, there are functions that aid in its use, such as composing monadic functions with each other and testing whether a Maybe contains a value. In the following hard-coded example, a Maybe type is used as the result of functions that may fail, in this case returning Nothing if there is a divide-by-zero. One way to test whether a Maybe contains a value is to use if statements; other languages may use pattern matching instead. Monads can compose functions that return Maybe, putting them together.
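As a sketch of this idea in a language without a built-in Maybe, the empty state can be modeled with Python's None (the name safe_div is illustrative, not from any library):

```python
def safe_div(x, y):
    """Divide x by y, returning None (playing the role of Nothing)
    on division by zero, and the quotient (a Just) otherwise."""
    if y == 0:
        return None
    return x / y

# Testing whether the Maybe-like result holds a value with an if statement:
result = safe_div(10, 2)
if result is not None:
    print("Just", result)    # -> Just 5.0
else:
    print("Nothing")
```

Note that this simplified encoding cannot represent a Just containing None itself, a distinction a real option type such as Rust's Option<T> does make.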
A concrete example might have one function take in several Maybe parameters and return a single Maybe whose value is Nothing when any of the parameters is Nothing, as in the following: Instead of repeating Some expressions, we can use something called a bind operator (also known as "map", "flatmap", or "shove": 2205s ). This operation takes a monad and a function that returns a monad, runs the function on the inner value of the passed monad, and returns the monad from the function. In Haskell, there is an operator bind, or (>>=), that allows for this monadic composition in a more elegant form similar to function composition.: 150–151  With >>= available, chainable_division can be expressed much more succinctly with the help of anonymous functions (i.e. lambdas). Notice in the expression below how the two nested lambdas each operate on the wrapped value in the passed Maybe monad using the bind operator.: 93  What has been shown so far is basically a monad, but to be more concise, the following is a strict list of the qualities necessary for a monad, as defined in the following section.

Monadic type: a type (Maybe): 148–151 
Unit operation: a type converter (Just(x)): 93 
Bind operation: a combinator for monadic functions (>>= or .flatMap()): 150–151 

These are the three things necessary to form a monad. Other monads may embody different logical processes, and some may have additional properties, but all of them will have these three similar components. === Definition === The more common definition for a monad in functional programming, used in the above example, is actually based on a Kleisli triple ⟨T, η, μ⟩ rather than category theory's standard definition. The two constructs turn out to be mathematically equivalent, however, so either definition will yield a valid monad.
Given any well-defined basic types T and U, a monad consists of three parts:

A type constructor M that builds up a monadic type M T
A type converter, often called unit or return, that embeds an object x in the monad: unit(x) : T → M T
A combinator, typically called bind (as in binding a variable) and represented with an infix operator >>= or a method called flatMap, that unwraps a monadic variable, then inserts it into a monadic function/expression, resulting in a new monadic value: (mx >>= f) : (M T, T → M U) → M U

To fully qualify as a monad though, these three parts must also respect a few laws:

unit is a left-identity for bind: unit(a) >>= f ↔ f(a)
unit is also a right-identity for bind: ma >>= unit ↔ ma
bind is essentially associative: (ma >>= f) >>= g ↔ ma >>= (λx → f(x) >>= g)

Algebraically, this means any monad both gives rise to a category (called the Kleisli category) and a monoid in the category of functors (from values to computations), with monadic composition as a binary operator in the monoid: 2450s  and unit as identity in the monoid. === Usage === The value of the monad pattern goes beyond merely condensing code and providing a link to mathematical reasoning. Whatever language or default programming paradigm a developer uses, following the monad pattern brings many of the benefits of purely functional programming. By reifying a specific kind of computation, a monad not only encapsulates the tedious details of that computational pattern, but it does so in a declarative way, improving the code's clarity. As monadic values explicitly represent not only computed values, but computed effects, a monadic expression can be replaced with its value in referentially transparent positions, much like pure expressions can be, allowing for many techniques and optimizations based on rewriting. Typically, programmers will use bind to chain monadic functions into a sequence, which has led some to describe monads as "programmable semicolons", a reference to how many imperative languages use semicolons to separate statements.
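Such a bind-chained pipeline can be sketched in Python using None for Nothing (the names safe_div and chainable_division are illustrative, not an established API):

```python
def safe_div(x, y):
    """x / y, or None (Nothing) on division by zero."""
    return None if y == 0 else x / y

def bind(m, f):
    """Run f on the wrapped value if one exists; otherwise
    short-circuit, propagating Nothing down the pipeline."""
    return None if m is None else f(m)

def chainable_division(num, den1, den2):
    # (num / den1) / den2, failing safely if either denominator is zero
    return bind(safe_div(num, den1), lambda r: safe_div(r, den2))

print(chainable_division(8, 2, 2))   # -> 2.0
print(chainable_division(8, 0, 2))   # -> None
```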
However, monads do not actually order computations; even in languages that use them as central features, simpler function composition can arrange steps within a program. A monad's general utility rather lies in simplifying a program's structure and improving separation of concerns through abstraction. The monad structure can also be seen as a uniquely mathematical and compile-time variation on the decorator pattern. Some monads can pass along extra data that is inaccessible to functions, and some even exert finer control over execution, for example only calling a function under certain conditions. Because they let application programmers implement domain logic while offloading boilerplate code onto pre-developed modules, monads can even be considered a tool for aspect-oriented programming. One other noteworthy use for monads is isolating side effects, like input/output or mutable state, in otherwise purely functional code. Even purely functional languages can still implement these "impure" computations without monads, via an intricate mix of function composition and continuation-passing style (CPS) in particular. With monads though, much of this scaffolding can be abstracted away, essentially by taking each recurring pattern in CPS code and bundling it into a distinct monad. If a language does not support monads by default, it is still possible to implement the pattern, often without much difficulty. When translated from category theory to programming terms, the monad structure is a generic concept and can be defined directly in any language that supports an equivalent feature for bounded polymorphism. A concept's ability to remain agnostic about operational details while working on underlying types is powerful, but the unique features and stringent behavior of monads set them apart from other concepts.
== Applications == Discussions of specific monads will typically focus on solving a narrow implementation problem since a given monad represents a specific computational form. In some situations though, an application can even meet its high-level goals by using appropriate monads within its core logic. Here are just a few applications that have monads at the heart of their designs: The Parsec parser library uses monads to combine simpler parsing rules into more complex ones, and is particularly useful for smaller domain-specific languages. xmonad is a tiling window manager centered on the zipper data structure, which itself can be treated monadically as a specific case of delimited continuations. LINQ by Microsoft provides a query language for the .NET Framework that is heavily influenced by functional programming concepts, including core operators for composing queries monadically. ZipperFS is a simple, experimental file system that also uses the zipper structure primarily to implement its features. The Reactive extensions framework essentially provides a (co)monadic interface to data streams that realizes the observer pattern. == History == The term "monad" in programming dates to the APL and J programming languages, which do tend toward being purely functional. However, in those languages, "monad" is only shorthand for a function taking one parameter (a function with two parameters being a "dyad", and so on). The mathematician Roger Godement was the first to formulate the concept of a monad (dubbing it a "standard construction") in the late 1950s, though the term "monad" that came to dominate was popularized by category-theorist Saunders Mac Lane. The form defined above using bind, however, was originally described in 1965 by mathematician Heinrich Kleisli in order to prove that any monad could be characterized as an adjunction between two (covariant) functors. 
Starting in the 1980s, a vague notion of the monad pattern began to surface in the computer science community. According to programming language researcher Philip Wadler, computer scientist John C. Reynolds anticipated several facets of it in the 1970s and early 1980s, when he discussed the value of continuation-passing style, of category theory as a rich source for formal semantics, and of the type distinction between values and computations. The research language Opal, which was actively designed up until 1990, also effectively based I/O on a monadic type, but the connection was not realized at the time. The computer scientist Eugenio Moggi was the first to explicitly link the monad of category theory to functional programming, in a conference paper in 1989, followed by a more refined journal submission in 1991. In earlier work, several computer scientists had advanced using category theory to provide semantics for the lambda calculus. Moggi's key insight was that a real-world program is not just a function from values to other values, but rather a transformation that forms computations on those values. When formalized in category-theoretic terms, this leads to the conclusion that monads are the structure to represent these computations. Several others popularized and built on this idea, including Philip Wadler and Simon Peyton Jones, both of whom were involved in the specification of Haskell. In particular, Haskell used a problematic "lazy stream" model up through v1.2 to reconcile I/O with lazy evaluation, until switching over to a more flexible monadic interface. The Haskell community would go on to apply monads to many problems in functional programming, and in the 2010s, researchers working with Haskell eventually recognized that monads are applicative functors; and that both monads and arrows are monoids. 
At first, programming with monads was largely confined to Haskell and its derivatives, but as functional programming has influenced other paradigms, many languages have incorporated a monad pattern (in spirit if not in name). Formulations now exist in Scheme, Perl, Python, Racket, Clojure, Scala, and F#, and have also been considered for a new ML standard. == Analysis == One benefit of the monad pattern is bringing mathematical precision to bear on the composition of computations. Not only can the monad laws be used to check an instance's validity, but features from related structures (like functors) can be used through subtyping. === Verifying the monad laws === Returning to the Maybe example, its components were declared to make up a monad, but no proof was given that it satisfies the monad laws. This can be rectified by plugging the specifics of Maybe into one side of the general laws, then algebraically building a chain of equalities to reach the other side:

Law 1 (unit is a left identity for bind):

unit(a) >>= f
⇔ (Just a) >>= f
⇔ f(a)

Law 2 (unit is a right identity for bind):

ma >>= unit
⇔ if ma is (Just a) then unit(a) = Just a, else Nothing
⇔ ma

Law 3 (bind is associative):

(ma >>= f) >>= g ⇔ ma >>= (λx → f(x) >>= g)

where both sides evaluate to:

if ma is (Just a) and f(a) is (Just b) then g(b), else Nothing

=== Derivation from functors === Though rarer in computer science, one can use category theory directly, which defines a monad as a functor with two additional natural transformations. So to begin, a structure requires a higher-order function (or "functional") named map to qualify as a functor:

map : (T → U) → (M T → M U)

This is not always a major issue, however, especially when a monad is derived from a pre-existing functor, whereupon the monad inherits map automatically. (For historical reasons, this map is instead called fmap in Haskell.)
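Both the law derivation and the map functional above can be exercised in a short Python sketch, again encoding Nothing as None (f and g are arbitrary illustrative monadic functions):

```python
def unit(x):
    return x                         # Just x; None itself is reserved for Nothing

def bind(ma, f):
    return None if ma is None else f(ma)

def maybe_map(phi):
    """Lift a plain function phi into one acting on Maybe values."""
    return lambda ma: None if ma is None else phi(ma)

f = lambda x: unit(x + 1) if x < 10 else None    # monadic: may fail
g = lambda x: unit(x * 2)                        # monadic: always succeeds

# Law 1: unit(a) >>= f  ==  f(a)
assert bind(unit(3), f) == f(3)
# Law 2: ma >>= unit  ==  ma
for ma in (unit(3), None):
    assert bind(ma, unit) == ma
# Law 3: (ma >>= f) >>= g  ==  ma >>= (lambda x: f(x) >>= g)
for ma in (unit(3), unit(20), None):
    assert bind(bind(ma, f), g) == bind(ma, lambda x: bind(f(x), g))

print(maybe_map(lambda x: x + 1)(unit(4)))       # -> 5
```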
A monad's first transformation is actually the same unit from the Kleisli triple, but following the hierarchy of structures closely, it turns out unit characterizes an applicative functor, an intermediate structure between a monad and a basic functor. In the applicative context, unit is sometimes referred to as pure, but it is still the same function. What does differ in this construction is the law unit must satisfy; as bind is not defined, the constraint is given in terms of map instead:

(map φ) ∘ unit ↔ unit ∘ φ

The final leap from applicative functor to monad comes with the second transformation, the join function (in category theory this is a natural transformation usually called μ), which "flattens" nested applications of the monad:

join(mma) : M (M T) → M T

As the characteristic function, join must also satisfy three variations on the monad laws:

join ∘ (map join) ↔ join ∘ join
join ∘ unit ↔ id ↔ join ∘ (map unit)
join ∘ (map (map φ)) ↔ (map φ) ∘ join

Regardless of whether a developer defines a direct monad or a Kleisli triple, the underlying structure will be the same, and the forms can be derived from each other easily:

(ma >>= f) ↔ join((map f) ma)
join(mma) ↔ (mma >>= id)

=== Another example: List === The List monad naturally demonstrates how deriving a monad from a simpler functor can come in handy. In many languages, a list structure comes pre-defined along with some basic features, so a List type constructor and append operator (represented with ++ for infix notation) are assumed as already given here. Embedding a plain value in a list is also trivial in most languages:

unit(x) = [x]

From here, applying a function iteratively with a list comprehension may seem like an easy choice for bind and converting lists to a full monad. The difficulty with this approach is that bind expects monadic functions, which in this case will output lists themselves; as more functions are applied, layers of nested lists will accumulate, requiring more than a basic comprehension.
However, a procedure to apply any simple function over the whole list, in other words map, is straightforward: (map φ) xlist = [ φ(x1), φ(x2), ..., φ(xn) ] Now, these two procedures already promote List to an applicative functor. To fully qualify as a monad, only a correct notion of join to flatten repeated structure is needed, but for lists, that just means unwrapping an outer list to append the inner ones that contain values: join(xlistlist) = join([xlist1, xlist2, ..., xlistn]) = xlist1 ++ xlist2 ++ ... ++ xlistn The resulting monad is not only a list, but one that automatically resizes and condenses itself as functions are applied. bind can now also be derived with just a formula, then used to feed List values through a pipeline of monadic functions: (xlist >>= f) = join ∘ (map f) xlist One application for this monadic list is representing nondeterministic computation. List can hold results for all execution paths in an algorithm, then condense itself at each step to "forget" which paths led to which results (a sometimes important distinction from deterministic, exhaustive algorithms). Another benefit is that checks can be embedded in the monad; specific paths can be pruned transparently at their first point of failure, with no need to rewrite functions in the pipeline. A second situation where List shines is composing multivalued functions. For instance, the nth complex root of a number should yield n distinct complex numbers, but if another mth root is then taken of those results, the final m•n values should be identical to the output of the m•nth root. List completely automates this issue away, condensing the results from each step into a flat, mathematically correct list. == Techniques == Monads present opportunities for interesting techniques beyond just organizing program logic. Monads can lay the groundwork for useful syntactic features while their high-level and mathematical nature enable significant abstraction. 
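The List derivation above — unit, map, join, and bind as join ∘ (map f) — can be written out directly in Python, with list comprehensions standing in for the informal notation (function names are illustrative):

```python
def unit(x):
    return [x]

def list_map(phi, xs):
    return [phi(x) for x in xs]

def join(xss):
    """Flatten one level of nesting: unwrap the outer list
    and append the inner ones."""
    return [x for xs in xss for x in xs]

def bind(xs, f):
    """xs >>= f, derived as join . (map f)."""
    return join(list_map(f, xs))

# Nondeterminism: every combination of choices flows through the pipeline,
# and the nesting is condensed away at each step.
print(bind([1, 2], lambda x: bind([10, 20], lambda y: unit(x + y))))
# -> [11, 21, 12, 22]
```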
=== Syntactic sugar: do-notation === Although using bind openly often makes sense, many programmers prefer a syntax that mimics imperative statements (called do-notation in Haskell, perform-notation in OCaml, computation expressions in F#, and for-comprehensions in Scala). This is only syntactic sugar that disguises a monadic pipeline as a code block; the compiler will then quietly translate these expressions into the underlying functional code. Translating the add function from the Maybe example into Haskell can show this feature in action. A non-monadic version of add in Haskell looks like this: In monadic Haskell, return is the standard name for unit, plus lambda expressions must be handled explicitly, but even with these technicalities, the Maybe monad makes for a cleaner definition: With do-notation though, this can be distilled even further into a very intuitive sequence: A second example shows how Maybe can be used in an entirely different language: F#. With computation expressions, a "safe division" function that returns None for an undefined operand or division by zero can be written as: At build time, the compiler will internally "de-sugar" this function into a denser chain of bind calls: For a last example, even the general monad laws themselves can be expressed in do-notation: === General interface === Every monad needs a specific implementation that meets the monad laws, but other aspects, like its relation to other structures or standard idioms within a language, are shared by all monads. As a result, a language or library may provide a general Monad interface with function prototypes, subtyping relationships, and other general facts. Besides providing a head start to development and guaranteeing that a new monad inherits features from a supertype (such as functors), checking a monad's design against the interface adds another layer of quality control. === Operators === Monadic code can often be simplified even further through the judicious use of operators.
The map functional can be especially helpful since it works on more than just ad-hoc monadic functions; so long as a monadic function should work analogously to a predefined operator, map can be used to instantly "lift" the simpler operator into a monadic one. With this technique, the definition of add from the Maybe example could be distilled into:

add(mx,my) = map (+)

The process could be taken even one step further by defining add not just for Maybe, but for the whole Monad interface. By doing this, any new monad that matches the structure interface and implements its own map will immediately inherit a lifted version of add too. The only change to the function needed is generalizing the type signature:

add : (Monad Number, Monad Number) → Monad Number

Another monadic operator that is also useful for analysis is monadic composition (represented as infix >=> here), which allows chaining monadic functions in a more mathematical style:

(f >=> g)(x) = f(x) >>= g

With this operator, the monad laws can be written in terms of functions alone, highlighting the correspondence to associativity and existence of an identity:

(unit >=> g) ↔ g
(f >=> unit) ↔ f
(f >=> g) >=> h ↔ f >=> (g >=> h)

In turn, the above shows the meaning of the "do" block in Haskell:

do _p <- f(x)
   _q <- g(_p)
   h(_q)
↔ (f >=> g >=> h)(x)

== More examples == === Identity monad === The simplest monad is the Identity monad, which just annotates plain values and functions to satisfy the monad laws:

newtype Id T = T
unit(x) = x
(x >>= f) = f(x)

Identity does actually have valid uses though, such as providing a base case for recursive monad transformers. It can also be used to perform basic variable assignment within an imperative-style block. === Collections === Any collection with a proper append is already a monoid, but it turns out that List is not the only collection that also has a well-defined join and qualifies as a monad.
One can even mutate List into these other monadic collections by simply imposing special properties on append: === IO monad (Haskell) === As already mentioned, pure code should not have unmanaged side effects, but that does not preclude a program from explicitly describing and managing effects. This idea is central to Haskell's IO monad, where an object of type IO a can be seen as describing an action to be performed in the world, optionally providing information about the world of type a. An action that provides no information about the world has the type IO (), "providing" the dummy value (). When a programmer binds an IO value to a function, the function computes the next action to be performed based on the information about the world provided by the previous action (input from users, files, etc.). Most significantly, because the value of the IO monad can only be bound to a function that computes another IO monad, the bind function imposes a discipline of a sequence of actions where the result of an action can only be provided to a function that will compute the next action to perform. This means that actions which do not need to be performed never are, and actions that do need to be performed have a well defined sequence. For example, Haskell has several functions for acting on the wider file system, including one that checks whether a file exists and another that deletes a file. Their two type signatures are: The first is interested in whether a given file really exists, and as a result, outputs a Boolean value within the IO monad. The second function, on the other hand, is only concerned with acting on the file system so the IO container it outputs is empty. IO is not limited just to file I/O though; it even allows for user I/O, and along with imperative syntax sugar, can mimic a typical "Hello, World!" 
program: Desugared, this translates into the following monadic pipeline (>> in Haskell is just a variant of bind for when only monadic effects matter and the underlying result can be discarded): === Writer monad (JavaScript) === Another common situation is keeping a log file or otherwise reporting a program's progress. Sometimes, a programmer may want to log even more specific, technical data for later profiling or debugging. The Writer monad can handle these tasks by generating auxiliary output that accumulates step-by-step. To show how the monad pattern is not restricted to primarily functional languages, this example implements a Writer monad in JavaScript. First, an array (with nested tails) allows constructing the Writer type as a linked list. The underlying output value will live in position 0 of the array, and position 1 will implicitly hold a chain of auxiliary notes: Defining unit is also very simple: Only unit is needed to define simple functions that output Writer objects with debugging notes: A true monad still requires bind, but for Writer, this amounts simply to concatenating a function's output to the monad's linked list: The sample functions can now be chained together using bind, but defining a version of monadic composition (called pipelog here) allows applying these functions even more succinctly: The final result is a clean separation of concerns between stepping through computations and logging them to audit later: === Environment monad === An environment monad (also called a reader monad and a function monad) allows a computation to depend on values from a shared environment. The monad type constructor maps a type T to functions of type E → T, where E is the type of the shared environment. 
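That E → T encoding translates naturally into Python, where a computation is a function of the environment; ask and local are conventional reader-monad helper names, and the config dict is illustrative:

```python
# Environment (reader) monad: a computation is a function env -> value.
def unit(t):
    return lambda env: t                  # ignore the environment

def bind(r, f):
    # r : env -> T ;  f : T -> (env -> T')
    # Run r, then run the computation f produces, in the same environment.
    return lambda env: f(r(env))(env)

ask = lambda env: env                     # retrieve the whole environment

def local(modify, comp):
    # Run comp in a modified sub-environment.
    return lambda env: comp(modify(env))

greet = bind(ask, lambda cfg: unit("hello, " + cfg["name"]))
print(greet({"name": "world"}))           # -> hello, world
```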
The monad functions are:

return : T → E → T = t ↦ e ↦ t
bind : (E → T) → (T → E → T′) → E → T′ = r ↦ f ↦ e ↦ f (r e) e

The following monadic operations are useful:

ask : E → E = id_E
local : (E → E) → (E → T) → E → T = f ↦ c ↦ e ↦ c (f e)

The ask operation is used to retrieve the current context, while local executes a computation in a modified subcontext. As in a state monad, computations in the environment monad may be invoked by simply providing an environment value and applying it to an instance of the monad. Formally, a value in an environment monad is equivalent to a function with an additional, anonymous argument; return and bind are equivalent to the K and S combinators, respectively, in the SKI combinator calculus. === State monads === A state monad allows a programmer to attach state information of any type to a calculation. Given any value type, the corresponding type in the state monad is a function which accepts a state, then outputs a new state (of type s) along with a return value (of type t). This is similar to an environment monad, except that it also returns a new state, and thus allows modeling a mutable environment. Note that this monad takes a type parameter, the type of the state information. The monad operations are given formally below. Useful state operations include get, which reads the current state, and put, which overwrites it; another operation (runState in Haskell) applies a state monad to a given initial state. do-blocks in a state monad are sequences of operations that can examine and update the state data.
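A Python sketch of this state threading, where each computation is a function from a state to a (value, new state) pair (get, put, and the tick counter are conventional, illustrative names):

```python
def unit(t):
    return lambda s: (t, s)                # return a value, state untouched

def bind(m, k):
    def stateful(s):
        t, s2 = m(s)                       # run m, obtaining a value and state
        return k(t)(s2)                    # thread the new state into k's result
    return stateful

get = lambda s: (s, s)                     # read the current state as the value
put = lambda new: lambda s: (None, new)    # replace the state

# A counter step: return the current count while incrementing the state.
tick = bind(get, lambda n: bind(put(n + 1), lambda _: unit(n)))

run = lambda m, s0: m(s0)                  # apply the monad to an initial state
print(run(bind(tick, lambda _: tick), 0))  # -> (1, 2)
```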
Informally, a state monad of state type S maps the type of return values T into functions of type S → T × S, where S is the underlying state. The return and bind functions are:

return : T → S → T × S = t ↦ s ↦ (t, s)
bind : (S → T × S) → (T → S → T′ × S) → S → T′ × S = m ↦ k ↦ s ↦ (k t s′) where (t, s′) = m s

From the category theory point of view, a state monad is derived from the adjunction between the product functor and the exponential functor, which exists in any cartesian closed category by definition. === Continuation monad === A continuation monad with return type R maps type T into functions of type (T → R) → R. It is used to model continuation-passing style.
The return and bind functions are as follows:

return : T → (T → R) → R = t ↦ f ↦ f t
bind : ((T → R) → R) → (T → (T′ → R) → R) → (T′ → R) → R = c ↦ f ↦ k ↦ c (t ↦ f t k)

The call-with-current-continuation function is defined as follows:

call/cc : ((T → (T′ → R) → R) → (T → R) → R) → (T → R) → R = f ↦ k ↦ (f (t ↦ x ↦ k t) k)

=== Program logging === The following code is pseudocode. Suppose we have two functions foo and bar, with types

foo : int → int
bar : int → int

That is, both functions take in an integer and return another integer. Then we can apply the functions in succession like so:

foo(bar(x))

Where the result is the result of foo applied to the result of bar applied to x. But suppose we are debugging our program, and we would like to add logging messages to foo and bar. So we change the types as so:

foo : int → int * string
bar : int → int * string

So that both functions return a tuple, with the result of the application as the integer, and a logging message with information about the applied function and all the previously applied functions as the string. Unfortunately, this means we can no longer compose foo and bar, as their input type int is not compatible with their output type int * string.
And although we can again gain composability by modifying the types of each function to be int * string -> int * string, this would require us to add boilerplate code to each function to extract the integer from the tuple, which would get tedious as the number of such functions increases. Instead, let us define a helper function to abstract away this boilerplate for us:

bind : int * string → (int → int * string) → int * string
bind (x, s) f = let (y, t) = f x in (y, s + t)

bind takes in an integer and string tuple, then takes in a function (like foo) that maps from an integer to an integer and string tuple. Its output is an integer and string tuple, which is the result of applying the input function to the integer within the input integer and string tuple. In this way, we only need to write the boilerplate code to extract the integer from the tuple once, in bind. Now we have regained some composability. For example:

bind (bind (x, s) bar) foo

Where (x, s) is an integer and string tuple. To make the benefits even clearer, let us define an infix operator as an alias for bind, so that t >>= f is the same as bind t f. Then the above example becomes:

(x, s) >>= bar >>= foo

Finally, we define a new function, return, to avoid writing (x, "") every time we wish to create an empty logging message, where "" is the empty string:

return x = (x, "")

Which wraps x in the tuple described above. The result is a pipeline for logging messages:

return x >>= bar >>= foo

That allows us to more easily log the effects of bar and foo on x. int * string denotes a pseudo-coded monadic value. bind and return are analogous to the corresponding functions of the same name. In fact, int * string, bind, and return form a monad. === Additive monads === An additive monad is a monad endowed with an additional closed, associative, binary operator mplus and an identity element under mplus, called mzero. The Maybe monad can be considered additive, with Nothing as mzero and a variation on the OR operator as mplus. List is also an additive monad, with the empty list [] acting as mzero and the concatenation operator ++ as mplus.
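The logging pipeline described above can be made concrete in Python, with (int, str) tuples standing in for int * string (ret stands in for return, which is a Python keyword; the log messages are illustrative):

```python
def bind(t, f):
    x, s = t
    y, s2 = f(x)              # run the logged function on the inner integer
    return (y, s + s2)        # concatenate the accumulated messages

def ret(x):
    return (x, "")            # wrap a value with an empty logging message

def foo(x):
    return (x + 1, "foo was called. ")

def bar(x):
    return (x * 2, "bar was called. ")

print(bind(bind(ret(5), bar), foo))
# -> (11, 'bar was called. foo was called. ')
```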
Intuitively, mzero represents a monadic wrapper with no value from an underlying type, but is also considered a "zero" (rather than a "one") since it acts as an absorber for bind, returning mzero whenever bound to a monadic function. This property is two-sided, and bind will also return mzero when any value is bound to a monadic zero function. In category-theoretic terms, an additive monad qualifies once as a monoid over monadic functions with bind (as all monads do), and again over monadic values via mplus.

=== Free monads ===

Sometimes, the general outline of a monad may be useful, but no simple pattern recommends one monad or another. This is where a free monad comes in; as a free object in the category of monads, it can represent monadic structure without any specific constraints beyond the monad laws themselves. Just as a free monoid concatenates elements without evaluation, a free monad allows chaining computations with markers to satisfy the type system, but otherwise imposes no deeper semantics itself. For example, by working entirely through the Just and Nothing markers, the Maybe monad is in fact a free monad. The List monad, on the other hand, is not a free monad since it brings extra, specific facts about lists (like append) into its definition. One last example is an abstract free monad built as a linked-list-like chain of markers; free monads, however, are not restricted to such a structure, and can be built around other structures like trees. Using free monads intentionally may seem impractical at first, but their formal nature is particularly well-suited for syntactic problems. A free monad can be used to track syntax and type while leaving semantics for later, and has found use in parsers and interpreters as a result. Others have applied them to more dynamic, operational problems too, such as providing iteratees within a language.

=== Comonads ===

Besides generating monads with extra properties, for any given monad, one can also define a comonad.
Conceptually, if monads represent computations built up from underlying values, then comonads can be seen as reductions back down to values. Monadic code, in a sense, cannot be fully "unpacked"; once a value is wrapped within a monad, it remains quarantined there along with any side-effects (a good thing in purely functional programming). Sometimes though, a problem is more about consuming contextual data, which comonads can model explicitly. Technically, a comonad is the categorical dual of a monad, which loosely means that it will have the same required components, only with the direction of the type signatures reversed. Starting from the bind-centric monad definition, a comonad consists of:

A type constructor W that marks the higher-order type W T
The dual of unit, called counit here, which extracts the underlying value from the comonad: counit(wa) : W T → T
A reversal of bind (also represented with =>>) that extends a chain of reducing functions: (wa =>> f) : (W U, W U → T) → W T

extend and counit must also satisfy duals of the monad laws:

counit ∘ ( (wa =>> f) → wb ) ↔ f(wa) → b
wa =>> counit ↔ wa
wa ( (=>> f(wx = wa)) → wb (=>> g(wy = wb)) → wc ) ↔ ( wa (=>> f(wx = wa)) → wb ) (=>> g(wy = wb)) → wc

Analogous to monads, comonads can also be derived from functors using a dual of join: duplicate takes an already comonadic value and wraps it in another layer of comonadic structure: duplicate(wa) : W T → W (W T). While operations like extend are reversed, however, a comonad does not reverse functions it acts on, and consequently, comonads are still functors with map, not cofunctors.
The alternate definition with duplicate, counit, and map must also respect its own comonad laws:

((map duplicate) ∘ duplicate) wa ↔ (duplicate ∘ duplicate) wa ↔ wwwa
((map counit) ∘ duplicate) wa ↔ (counit ∘ duplicate) wa ↔ wa
((map map φ) ∘ duplicate) wa ↔ (duplicate ∘ (map φ)) wa ↔ wwb

And as with monads, the two forms can be converted automatically:

(map φ) wa ↔ wa =>> (φ ∘ counit) wx
duplicate wa ↔ wa =>> wx
wa =>> f(wx) ↔ ((map f) ∘ duplicate) wa

A simple example is the Product comonad, which outputs values based on an input value and shared environment data. In fact, the Product comonad is just the dual of the Writer monad and effectively the same as the Reader monad (both discussed below). Product and Reader differ only in which function signatures they accept, and how they complement those functions by wrapping or unwrapping values. A less trivial example is the Stream comonad, which can be used to represent data streams and attach filters to the incoming signals with extend. In fact, while not as popular as monads, researchers have found comonads particularly useful for stream processing and modeling dataflow programming. Due to their strict definitions, however, one cannot simply move objects back and forth between monads and comonads. As an even higher abstraction, arrows can subsume both structures, but finding more granular ways to combine monadic and comonadic code is an active area of research.
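The Product (environment) comonad described above can be sketched in Python. This is a hedged illustration rather than code from the article: a comonadic value is modeled as a pair (env, value), counit extracts the value, and extend runs a reducing function over the whole pair while preserving the environment.

```python
# Hypothetical sketch of the Product comonad: W T is a pair (env, value).
def counit(wa):
    """Extract the underlying value (the dual of unit)."""
    env, a = wa
    return a

def extend(wa, f):
    """Apply a reducing function f : W T -> U and re-wrap the result
    with the same shared environment (the dual of bind)."""
    env, _ = wa
    return (env, f(wa))

def scaled(wa):
    """A reducing function that consumes both context and value."""
    env, a = wa
    return a * env

wa = (10, 3)
print(counit(wa))                  # → 3
print(counit(extend(wa, scaled)))  # → 30
# One comonad law: extending with counit changes nothing
assert extend(wa, counit) == wa
```

Here the environment component models the "shared environment data" the text mentions; the value is reduced, never wrapped further, which is the sense in which comonads run opposite to monads.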
== See also ==

Alternatives for modeling computations:
Effect systems (particularly algebraic effect handlers) are a different way to describe side effects as types
Uniqueness types are a third approach to handling side-effects in functional languages

Related design concepts:
Aspect-oriented programming emphasizes separating out ancillary bookkeeping code to improve modularity and simplicity
Inversion of control is the abstract principle of calling specific functions from an overarching framework
Type classes are a specific language feature used to implement monads and other structures in Haskell
The decorator pattern is a more concrete, ad-hoc way to achieve similar benefits in object-oriented programming

Generalizations of monads:
Applicative functors generalize from monads by keeping only unit and laws relating it to map
Arrows use additional structure to bring plain functions and monads under a single interface
Monad transformers act on distinct monads to combine them modularly

== Notes ==

== References ==

== External links ==

HaskellWiki references:
"All About Monads" (originally by Jeff Newbern) — A comprehensive discussion of all the common monads and how they work in Haskell; includes the "mechanized assembly line" analogy.
"Typeclassopedia" (originally by Brent Yorgey) — A detailed exposition of how the leading typeclasses in Haskell, including monads, interrelate.

Tutorials:
"A Fistful of Monads" (from the online Haskell textbook Learn You a Haskell for Great Good!) — A chapter introducing monads from the starting-point of functor and applicative functor typeclasses, including examples.
"For a Few Monads More" — A second chapter explaining more details and examples, including a Probability monad for Markov chains.
"Functors, Applicatives, And Monads In Pictures" (by Aditya Bhargava) — A quick, humorous, and visual tutorial on monads.
Interesting cases:
"UNIX pipes as IO monads" (by Oleg Kiselyov) — A short essay explaining how Unix pipes are effectively monadic.
Pro Scala: Monadic Design Patterns for the Web (by Gregory Meredith) — An unpublished, full-length manuscript on how to improve many facets of web development in Scala with monads.
https://en.wikipedia.org/wiki/Monad_(functional_programming)
In computing, a channel is a model for interprocess communication and synchronization via message passing. A message may be sent over a channel, and another process or thread is able to receive messages sent over a channel it has a reference to, as a stream. Different implementations of channels may be buffered or not, and either synchronous or asynchronous.

== libthread channels ==

The multithreading library, libthread, which was first created for the operating system Plan 9, offers inter-thread communication based on fixed-size channels.

== OCaml events ==

The OCaml event module offers typed channels for synchronization. When the module's send and receive functions are called, they create corresponding send and receive events which can be synchronized.

== Examples ==

=== Lua Love2D ===

The Love2D library, which uses the Lua programming language, implements channels with push and pop operations similar to stacks. The pop operation will not block so long as there is data resident on the stack. A demand operation is equivalent to pop, except it will block until there is data on the stack.

=== XMOS XC ===

The XMOS programming language XC provides a primitive type "chan" and two operators "<:" and ":>" for sending and receiving data from a channel. In this example, two hardware threads are started on the XMOS, running the two lines in the "par" block. The first line transmits the number 42 through the channel while the second waits until it is received and sets the value of x. The XC language also allows asynchronous receiving on channels through a select statement.

=== Go ===

This snippet of Go code performs similarly to the XC code. First the channel c is created, then a goroutine is spawned which sends 42 through the channel. When the number is put in the channel, x is set to 42. Go allows channels to buffer contents, as well as non-blocking receiving through the use of a select block.

=== Rust ===

Rust provides asynchronous channels for communication between threads.
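Since the code snippets referenced above are not reproduced here, the XC/Go pattern (one thread sends 42 through a channel, another receives it into x) can be approximated in Python. This is an illustrative sketch, not code from the article: queue.Queue stands in for a channel (a size-1 buffer, since Python's queue has no truly unbuffered mode) and a thread stands in for a goroutine.

```python
# Sketch of the channel send/receive pattern described in the text.
import queue
import threading

c = queue.Queue(maxsize=1)   # the "channel"

def sender():
    c.put(42)                # like `c <- 42` in Go

threading.Thread(target=sender).start()
x = c.get()                  # like `x := <-c`; blocks until a value arrives
print(x)  # → 42
```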
Channels allow a unidirectional flow of information between two endpoints: the Sender and the Receiver.

== Applications ==

In addition to their fundamental use for interprocess communication, channels can be used as a primitive to implement various other concurrent programming constructs which can be realized as streams. For example, channels can be used to construct futures and promises, where a future is a one-element channel, and a promise is a process that sends to the channel, fulfilling the future. Similarly, iterators can be constructed directly from channels.

== List of implementations ==

List of non-standard, library-based implementations of channels:

For Scala: CSO – Communicating Scala Objects is a complete DSL for channel-based communication and concurrency whose semantic primitives are generalizations of the OCCAM primitives. CSO has been used since 2007 in the teaching of concurrent programming, and relevant lectures can be found with the ThreadCSO implementation.
For C++: stlab – This implementation supports splits, and different merge and zip operations. Different executors can be attached to the individual nodes.
For Rust: Tokio

== References ==

== External links ==

Libthread Channel Implementation
Bell Labs and CSP Threads
Limbo – Inferno Application Programming
Stackless.com – Channels
OCaml Events
https://en.wikipedia.org/wiki/Channel_(programming)
A first-generation programming language (1GL) is a machine-level programming language and belongs to the low-level programming languages; the grouping comprises the machine languages used to program first-generation computers. Originally, no translator was used to compile or assemble the first-generation language. The first-generation programming instructions were entered through the front panel switches of the computer system. The instructions in 1GL are made of binary numbers, represented by 1s and 0s. This makes the language suitable for the machine to understand, but far more difficult for a human programmer to interpret and learn.

The main advantage of programming in 1GL is that the code can run very fast and very efficiently, precisely because the instructions are executed directly by the central processing unit (CPU). One of the main disadvantages of programming in a low-level language is that when an error occurs, the code is not as easy to fix. First-generation languages are very much adapted to a specific computer and CPU, and code portability is therefore significantly reduced in comparison to higher-level languages.

Modern-day programmers still occasionally use machine-level code, especially when programming lower-level functions of the system, such as drivers and interfaces with firmware and hardware devices. Modern tools such as native-code compilers are used to produce machine-level code from a higher-level language.

== References ==
https://en.wikipedia.org/wiki/First-generation_programming_language
A computer language is a formal language used to communicate with a computer. Types of computer languages include:

Construction language – all forms of communication by which a human can specify an executable problem solution to a computer
Command language – a language used to control the tasks of the computer itself, such as starting programs
Configuration language – a language used to write configuration files
Programming language – a formal language designed to communicate instructions to a machine, particularly a computer
Scripting language – a type of programming language which typically is interpreted at runtime rather than being compiled
Query language – a language used to make queries in databases and information systems
Transformation language – designed to transform some input text in a certain formal language into a modified output text that meets some specific goal
Data exchange language – a language that is domain-independent and can be used for data from any kind of discipline; examples: JSON, XML
Markup language – a grammar for annotating a document in a way that is syntactically distinguishable from the text, such as HTML
Modeling language – an artificial language used to express information or knowledge, often for use in computer system design
Architecture description language – used as a language (or a conceptual model) to describe and represent system architectures
Hardware description language – used to model integrated circuits
Page description language – describes the appearance of a printed page in a higher level than an actual output bitmap
Simulation language – a language used to describe simulations
Specification language – a language used to describe what a system should do
Style sheet language – a computer language that expresses the presentation of structured documents, such as CSS

== See also ==

Serialization
Domain-specific language – a language specialized to a particular application domain
Expression language
General-purpose language – a language that is broadly applicable across application domains and lacks specialized features for a particular domain
Lists of programming languages
Natural language processing – the use of computers to process text or speech in human language

== External links ==

Media related to Computer languages at Wikimedia Commons
https://en.wikipedia.org/wiki/Computer_language
In computer programming, a callback is a function that is stored as data (a reference) and designed to be called by another function – often back to the original abstraction layer. A function that accepts a callback parameter may be designed to call back before returning to its caller, which is known as synchronous or blocking. The function that accepts a callback may instead be designed to store the callback so that it can be called back after returning, which is known as asynchronous, non-blocking or deferred. Programming languages support callbacks in different ways, such as function pointers, lambda expressions and blocks.

A callback can be likened to leaving instructions with a tailor for what to do when a suit is ready, such as calling a specific phone number or delivering it to a given address. These instructions represent a callback: a function provided in advance to be executed later, often by a different part of the system and not necessarily by the one that received it. The term callback can be misleading, as it does not necessarily imply a return to the original caller, unlike a telephone callback.

The Mesa programming language formalized the callback mechanism now used across programming languages. By passing a procedure as a parameter, Mesa essentially delegated the execution of that procedure to a later point in time when a specific event occurred, similar to how callbacks are implemented in modern programming languages.

== Use ==

A blocking callback runs in the execution context of the function that passes the callback. A deferred callback can run in a different context, such as during an interrupt or from a thread. As such, a deferred callback can be used for synchronization and for delegating work to another thread.

=== Event handling ===

A callback can be used for event handling. Often, consuming code registers a callback for a particular type of event. When that event occurs, the callback is called.
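The register-then-notify pattern just described can be sketched in Python. This is an illustrative sketch, not code from the article: the names register, fire, and handlers are hypothetical.

```python
# A minimal event-handler registry: consuming code registers a callback
# for an event type, and the dispatcher calls it back when the event occurs.
handlers = {}

def register(event_type, callback):
    handlers.setdefault(event_type, []).append(callback)

def fire(event_type, payload):
    for cb in handlers.get(event_type, []):
        cb(payload)  # the deferred callback runs here, in the dispatcher's context

log = []
register("click", lambda pos: log.append(f"clicked at {pos}"))
fire("click", (10, 20))
print(log)  # → ['clicked at (10, 20)']
```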
Callbacks are often used to program the graphical user interface (GUI) of a program that runs in a windowing system. The application supplies a reference to a custom callback function for the windowing system to call. The windowing system calls this function to notify the application of events like mouse clicks and key presses.

=== Asynchronous action ===

A callback can be used to implement asynchronous processing. A caller requests an action and provides a callback to be called when the action completes, which might be long after the request is made.

=== Polymorphism ===

A callback can be used to implement polymorphism. In the following pseudocode, say_hi can take either write_status or write_error.

== Implementation ==

The callback technology is implemented differently by programming language. In assembly, C, C++, Pascal, Modula2 and other languages, a callback function is stored internally as a function pointer. Using the same storage allows different languages to directly share callbacks without a design-time or runtime interoperability layer. For example, the Windows API is accessible via multiple languages, compilers and assemblers. C++ also allows objects to provide an implementation of the function call operation. The Standard Template Library accepts these objects (called functors) as parameters. Many dynamic languages, such as JavaScript, Lua, Python, Perl and PHP, allow a function object to be passed. CLI languages such as C# and VB.NET provide a type-safe encapsulating function reference known as a delegate. Events and event handlers, as used in .NET languages, provide for callbacks. Functional languages generally support first-class functions, which can be passed as callbacks to other functions, stored as data or returned from functions.
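The say_hi polymorphism pseudocode mentioned earlier can be rendered in Python. The function bodies are illustrative assumptions; only the names say_hi, write_status, and write_error come from the text.

```python
# say_hi dispatches to whichever writer callback it is given,
# achieving polymorphic behavior without inheritance.
def write_status(message):
    return f"STATUS: {message}"

def write_error(message):
    return f"ERROR: {message}"

def say_hi(writer):
    return writer("hi")

print(say_hi(write_status))  # → STATUS: hi
print(say_hi(write_error))   # → ERROR: hi
```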
Many languages, including Perl, Python, Ruby, Smalltalk, C++ (11+), C# and VB.NET (new versions) and most functional languages, support lambda expressions: unnamed functions with inline syntax that generally act as callbacks. In some languages, including Scheme, ML, JavaScript, Perl, Python, Smalltalk, PHP (since 5.3.0), C++ (11+), Java (since 8), and many others, a lambda can be a closure, i.e. it can access variables locally defined in the context in which the lambda is defined. In an object-oriented programming language such as Java before it supported function-valued arguments, the behavior of a callback can be achieved by passing an object that implements an interface. The methods of this object are callbacks. In PL/I and ALGOL 60, a callback procedure may need to be able to access local variables in containing blocks, so it is called through an entry variable containing both the entry point and context information.

== Example code ==

=== C ===

Callbacks have a wide variety of uses, for example in error signaling: a Unix program might not want to terminate immediately when it receives SIGTERM, so to make sure that its termination is handled properly, it would register the cleanup function as a callback. Callbacks may also be used to control whether a function acts or not: Xlib allows custom predicates to be specified to determine whether a program wishes to handle an event. In the following C code, function print_number uses parameter get_number as a blocking callback. print_number is called with get_answer_to_most_important_question which acts as a callback function. When run, the output is: "Value: 42".

=== C++ ===

In C++, a functor can be used in addition to a function pointer.

=== C# ===

In the following C# code, method Helper.Method uses parameter callback as a blocking callback. Helper.Method is called with Log which acts as a callback function. When run, the following is written to the console: "Callback was: Hello world".
=== Kotlin ===

In the following Kotlin code, function askAndAnswer uses parameter getAnswer as a blocking callback. askAndAnswer is called with getAnswerToMostImportantQuestion which acts as a callback function. Running this will tell the user that the answer to their question is "42".

=== JavaScript ===

In the following JavaScript code, function calculate uses parameter operate as a blocking callback. calculate is called with multiply and then with sum, which act as callback functions. The collection method .each() of the jQuery library uses the function passed to it as a blocking callback: it calls the callback for each item of the collection. Deferred callbacks are commonly used for handling events from the user, the client and timers. Examples can be found in addEventListener, Ajax and XMLHttpRequest. In addition to using callbacks in JavaScript source code, C functions that take a function are supported via js-ctypes.

=== Red and REBOL ===

The following REBOL/Red code demonstrates callback use. As alert requires a string, form produces a string from the result of calculate. The get-word! values (i.e., :calc-product and :calc-sum) cause the interpreter to return the code of the function rather than evaluate the function. The datatype! references in a block! [float! integer!] restrict the type of values passed as arguments.

=== Rust ===

Rust has the Fn, FnMut and FnOnce traits.

=== Lua ===

In this Lua code, function calculate accepts the operation parameter which is used as a blocking callback. calculate is called with both add and multiply, and then uses an anonymous function to divide.

=== Python ===

In the following Python code, function calculate accepts a parameter operate that is used as a blocking callback. calculate is called with square which acts as a callback function.

=== Julia ===

In the following Julia code, function calculate accepts a parameter operate that is used as a blocking callback.
calculate is called with square which acts as a callback function.

== See also ==

== References ==

== External links ==

Basic Instincts: Implementing Callback Notifications Using Delegates – MSDN Magazine, December 2002
Implement callback routines in Java
Implement Script Callback Framework in ASP.NET 1.x – Code Project, 2 August 2004
Interfacing C++ member functions with C libraries (archived from the original on July 6, 2011)
Style Case Study #2: Generic Callbacks
https://en.wikipedia.org/wiki/Callback_(computer_programming)
In computing, source code, or simply code or source, is a plain text computer program written in a programming language. A programmer writes the human readable source code to control the behavior of a computer. Since a computer, at base, only understands machine code, source code must be translated before a computer can execute it. The translation process can be implemented three ways. Source code can be converted into machine code by a compiler or an assembler. The resulting executable is machine code ready for the computer. Alternatively, source code can be executed without conversion via an interpreter. An interpreter loads the source code into memory. It simultaneously translates and executes each statement. A method that combines compilation and interpretation is to first produce bytecode. Bytecode is an intermediate representation of source code that is quickly interpreted. == Background == The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language (simple instructions that could be directly executed by the processor). Machine language was difficult to debug and was not portable between different computer systems. Initially, hardware resources were scarce and expensive, while human resources were cheaper. As programs grew more complex, programmer productivity became a bottleneck. This led to the introduction of high-level programming languages such as Fortran in the mid-1950s. These languages abstracted away the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. As instructions distinct from the underlying computer hardware, software is therefore relatively recent, dating to these early high-level programming languages such as Fortran, Lisp, and Cobol. 
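The bytecode path described at the start of this section can be observed directly in CPython, as a small illustration (not an example from the article): compile() turns source text into a code object, the dis module shows the bytecode instructions, and exec() interprets them.

```python
# Source text -> bytecode -> execution, all within CPython.
import dis

source = "x = 2 + 3"
code = compile(source, "<example>", "exec")  # source code -> bytecode
dis.dis(code)                                # inspect the intermediate representation

namespace = {}
exec(code, namespace)                        # interpret the bytecode
print(namespace["x"])  # → 5
```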
The invention of high-level programming languages was simultaneous with the compilers needed to translate the source code automatically into machine code that can be directly executed on the computer hardware. Source code is the form of code that is modified directly by humans, typically in a high-level programming language. Object code can be directly executed by the machine and is generated automatically from the source code, often via an intermediate step, assembly language. While object code will only work on a specific platform, source code can be ported to a different machine and recompiled there. For the same source code, object code can vary significantly—not only based on the machine for which it is compiled, but also based on performance optimization from the compiler. == Organization == Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Software developers often use configuration management to track changes to source code files (version control). The configuration management system also keeps track of which object code file corresponds to which version of the source code file. == Purposes == === Estimation === The number of lines of source code is often used as a metric when evaluating the productivity of computer programmers, the economic value of a code base, effort estimation for projects in development, and the ongoing cost of software maintenance after release. === Communication === Source code is also used to communicate algorithms between people – e.g., code snippets online or in books. Computer programmers may find it helpful to review existing source code to learn about programming techniques. The sharing of source code between developers is frequently cited as a contributing factor to the maturation of their programming skills. 
Some people consider source code an expressive artistic medium. Source code often contains comments—blocks of text marked for the compiler to ignore. This content is not part of the program logic, but is instead intended to help readers understand the program. Companies often keep the source code confidential in order to hide algorithms considered a trade secret. Proprietary, secret source code and algorithms are widely used for sensitive government applications such as criminal justice, which results in black box behavior with a lack of transparency into the algorithm's methodology. The result is avoidance of public scrutiny of issues such as bias. === Modification === Access to the source code (not just the object code) is essential to modifying it. Understanding existing code is necessary to understand how it works and before modifying it. The rate of understanding depends both on the code base as well as the skill of the programmer. Experienced programmers have an easier time understanding what the code does at a high level. Software visualization is sometimes used to speed up this process. Many software programmers use an integrated development environment (IDE) to improve their productivity. IDEs typically have several features built in, including a source-code editor that can alert the programmer to common errors. Modification often includes code refactoring (improving the structure without changing functionality) and restructuring (improving structure and functionality at the same time). Nearly every change to code will introduce new bugs or unexpected ripple effects, which require another round of fixes. Code reviews by other developers are often used to scrutinize new code added to a project. The purpose of this phase is often to verify that the code meets style and maintainability standards and that it is a correct implementation of the software design. 
According to some estimates, code review dramatically reduces the number of bugs persisting after software testing is complete. Along with software testing that works by executing the code, static program analysis uses automated tools to detect problems with the source code. Many IDEs support code analysis tools, which might provide metrics on the clarity and maintainability of the code. Debuggers are tools that often enable programmers to step through execution while keeping track of which source code corresponds to each change of state.

=== Compilation and execution ===

Source code files in a high-level programming language must go through a stage of preprocessing into machine code before the instructions can be carried out. After being compiled, the program can be saved as an object file and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which typically makes interpreted programs run 10 to 100 times slower than programs in compiled programming languages.

== Quality ==

Software quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification. It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Maintainability is the quality of software enabling it to be easily modified without breaking existing functionality. Following coding conventions such as using clear function and variable names that correspond to their purpose makes maintenance easier.
Using conditional loop statements only when the code could execute more than once, and eliminating code that will never execute, can also increase understandability. Many software development organizations neglect maintainability during the development phase, even though it will increase long-term costs. Technical debt is incurred when programmers, often out of laziness or urgency to meet a deadline, choose quick and dirty solutions rather than build maintainability into their code. A common cause is underestimates in software development effort estimation, leading to insufficient resources allocated to development. A challenge with maintainability is that many software engineering courses do not emphasize it. Development engineers who know that they will not be responsible for maintaining the software do not have an incentive to build in maintainability.

== Copyright and licensing ==

The situation varies worldwide, but in the United States before 1974, software and its source code was not copyrightable and therefore always public domain software. In 1974, the US Commission on New Technological Uses of Copyrighted Works (CONTU) decided that "computer programs, to the extent that they embody an author's original creation, are proper subject matter of copyright". Proprietary software is rarely distributed as source code. Although the term open-source software literally refers to public access to the source code, open-source software has additional requirements: free redistribution, permission to modify the source code and release derivative works under the same license, and nondiscrimination between different uses—including commercial use. The free reusability of open-source software can speed up development.
== See also ==

Bytecode
Code as data
Coding conventions
Free software
Legacy code
Machine code
Markup language
Obfuscated code
Object code
Open-source software
Package (package management system)
Programming language
Source code repository
Syntax highlighting
Visual programming language

== References ==

=== Sources ===

== External links ==
https://en.wikipedia.org/wiki/Source_code
Historically, some programming languages have been specifically designed for artificial intelligence (AI) applications. Nowadays, many general-purpose programming languages also have libraries that can be used to develop AI applications. == General-purpose languages == Python is a high-level, general-purpose programming language that is popular in artificial intelligence. It has a simple, flexible and easily readable syntax. Its popularity results in a vast ecosystem of libraries, including for deep learning, such as PyTorch, TensorFlow, Keras, Google JAX. The library NumPy can be used for manipulating arrays, SciPy for scientific and mathematical analysis, Pandas for analyzing table data, Scikit-learn for various machine learning tasks, NLTK and spaCy for natural language processing, OpenCV for computer vision, and Matplotlib for data visualization. Hugging Face's transformers library can manipulate large language models. Jupyter Notebooks can execute cells of Python code, retaining the context between the execution of cells, which usually facilitates interactive data exploration. Elixir is a high-level functional programming language based on the Erlang VM. Its machine-learning ecosystem includes Nx for computing on CPUs and GPUs, Bumblebee and Axon for serving and training models, Broadway for distributed processing pipelines, Membrane for image and video processing, Livebook for prototyping and publishing notebooks, and Nerves for embedding on devices. R is widely used in new-style artificial intelligence, involving statistical computations, numerical analysis, the use of Bayesian inference, neural networks and in general machine learning. In domains like finance, biology, sociology or medicine it is considered one of the main standard languages. It offers several paradigms of programming like vectorial computation, functional programming and object-oriented programming. Lisp was the first language developed for artificial intelligence. 
It includes features intended to support programs that could perform general problem solving, such as lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, generators (streams), and cooperative multitasking. MATLAB is a proprietary numerical computing language developed by MathWorks. MATLAB has many toolboxes specifically for the development of AI, including the Statistics and Machine Learning Toolbox and Deep Learning Toolbox. These toolboxes provide APIs for the high-level and low-level implementation and use of many types of machine learning models that can integrate with the rest of the MATLAB ecosystem. These libraries also have support for code generation for embedded hardware. C++ is a compiled language that can interact with low-level hardware. In the context of AI, it is particularly used for embedded systems and robotics. Libraries such as TensorFlow C++, Caffe or Shogun can be used. JavaScript is widely used for web applications and can notably be executed with web browsers. Libraries for AI include TensorFlow.js, Synaptic and Brain.js. Julia is a language launched in 2012, which intends to combine ease of use and performance. It is mostly used for numerical analysis, computational science, and machine learning. C# can be used to develop high-level machine learning models using Microsoft's .NET suite. ML.NET was developed to aid integration with existing .NET projects, simplifying the process for existing software using the .NET platform. Smalltalk has been used extensively for simulations, neural networks, machine learning, and genetic algorithms. It implements a pure and elegant form of object-oriented programming using message passing. Haskell is a purely functional programming language. Lazy evaluation and the list and LogicT monads make it easy to express non-deterministic algorithms, which is often useful in AI. Infinite data structures are useful for search trees.
The language's features enable a compositional way to express algorithms. Working with graphs is however a bit harder at first because of functional purity. Wolfram Language includes a wide range of integrated machine learning abilities, from highly automated functions like Predict and Classify to functions based on specific methods and diagnostics. The functions work on many types of data, including numerical, categorical, time series, textual, and image. Mojo can run some Python programs, and supports programmability of AI hardware. It aims to combine the usability of Python with the performance of low-level programming languages like C++ or Rust. == Specialized languages == Prolog is a declarative language where programs are expressed in terms of relations, and execution occurs by running queries over these relations. Prolog is particularly useful for symbolic reasoning, database and language parsing applications. Artificial Intelligence Markup Language (AIML) is an XML dialect for use with Artificial Linguistic Internet Computer Entity (A.L.I.C.E.)-type chatterbots. Planner is a hybrid between procedural and logical languages. It gives a procedural interpretation to logical sentences where implications are interpreted with pattern-directed inference. Stanford Research Institute Problem Solver (STRIPS) is a language to express automated planning problem instances. It expresses an initial state, the goal states, and a set of actions. For each action preconditions (what must be established before the action is performed) and postconditions (what is established after the action is performed) are specified. POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. 
It is the core language of the Poplog programming environment, developed originally by the University of Sussex and more recently by the School of Computer Science at the University of Birmingham, which hosts the Poplog website. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions. CycL is a special-purpose language for Cyc.

== See also ==

Glossary of artificial intelligence
List of constraint programming languages
List of computer algebra systems
List of logic programming languages
List of constructed languages
Fifth-generation programming language

== Notes ==

== References ==
https://en.wikipedia.org/wiki/List_of_programming_languages_for_artificial_intelligence
Cocoa is Apple's native object-oriented application programming interface (API) for its desktop operating system macOS. Cocoa consists of the Foundation Kit, Application Kit, and Core Data frameworks, as included by the Cocoa.h header file, and the libraries and frameworks included by those, such as the C standard library and the Objective-C runtime. Cocoa applications are typically developed using the development tools provided by Apple, specifically Xcode (formerly Project Builder) and Interface Builder (now part of Xcode), using the programming languages Objective-C or Swift. However, the Cocoa programming environment can be accessed using other tools. It is also possible to write Objective-C Cocoa programs in a simple text editor and build them manually with GNU Compiler Collection (GCC) or Clang from the command line or from a makefile. For end users, Cocoa applications are those written using the Cocoa programming environment. Such applications usually have a familiar look and feel, since the Cocoa programming environment provides a lot of common UI elements (such as buttons, scroll bars, etc.), and automates many aspects of an application to comply with Apple's human interface guidelines. For iOS, iPadOS, tvOS, and watchOS, APIs similar to Application Kit, named UIKit and WatchKit, are available; they include gesture recognition, animation, and a different set of graphical control elements that are designed to accommodate the specific platforms they target. Foundation Kit and Core Data are also available in those operating systems. These APIs are used in applications for Apple devices such as the iPhone, the iPod Touch, the iPad, the Apple TV, and the Apple Watch.

== History ==

Cocoa continues the lineage of several software frameworks (mainly the App Kit and Foundation Kit) from the NeXTSTEP and OpenStep programming environments developed by NeXT in the 1980s and 1990s.
Apple acquired NeXT in December 1996, and subsequently went to work on the Rhapsody operating system that was to be the direct successor of OPENSTEP. It was to have had an emulation base for classic Mac OS applications, named Blue Box. The OpenStep base of libraries and binary support was termed Yellow Box. Rhapsody evolved into Mac OS X, and the Yellow Box became Cocoa. Thus, Cocoa classes begin with the letters NS, such as NSString or NSArray. These stand for the original proprietary term for the OpenStep framework, NeXTSTEP. Much of the work that went into developing OpenStep was applied to developing Mac OS X, Cocoa being the most visible part. However, differences exist. For example, NeXTSTEP and OpenStep used Display PostScript for on-screen display of text and graphics, while Cocoa depends on Apple's Quartz (which uses the Portable Document Format (PDF) imaging model, but not its underlying technology). Cocoa also has a level of Internet support, including the NSURL and WebKit HTML classes, and others, while OpenStep had only rudimentary support for managed network connections via NSFileHandle classes and Berkeley sockets. The API toolbox was originally called "Yellow Box" and was renamed Cocoa, a name that Apple had already trademarked. Apple's Cocoa trademark had originated as the name of a multimedia project design application for children. The name was intended to evoke "Java for kids", as it ran embedded in web pages. The original "Cocoa" program was discontinued following the return of Steve Jobs to Apple. At the time, Java was a big focus area for the company, so "Cocoa" was used as the new name for "Yellow Box" because, in addition to the native Objective-C usage, it could also be accessed from Java via a bridging layer. Even though Apple discontinued support for the Cocoa Java bridge, the name continued and was even used for the Cocoa Touch API.
== Memory management ==

One feature of the Cocoa environment is its facility for managing dynamically allocated memory. Foundation Kit's NSObject class, from which most classes, both vendor and user, are derived, implements a reference counting scheme for memory management. Objects that derive from the NSObject root class respond to a retain and a release message, and keep a retain count. A method titled retainCount exists but, contrary to its name, it will usually not return the exact retain count of an object. It is mainly used for system-level purposes. Invoking it manually is not recommended by Apple. A newly allocated object created with alloc or copy has a retain count of one. Sending that object a retain message increments the retain count, while sending it a release message decrements the retain count. When an object's retain count reaches zero, it is deallocated by a procedure similar to a C++ destructor. dealloc is not guaranteed to be invoked. Starting with Objective-C 2.0, the Objective-C runtime implemented an optional garbage collector, which is now obsolete and deprecated in favor of Automatic Reference Counting (ARC). In this model, the runtime turned Cocoa reference counting operations such as "retain" and "release" into no-ops. The garbage collector did not exist on the iOS implementation of Objective-C 2.0. Garbage collection in Objective-C ran on a low-priority background thread, and could halt on Cocoa's user events, with the intention of keeping the user experience responsive. The legacy garbage collector is still available on Mac OS X version 10.13, but no Apple-provided applications use it. In 2011, the LLVM compiler introduced Automatic Reference Counting (ARC), which replaces the conventional garbage collector by performing static analysis of Objective-C source code and inserting retain and release messages as necessary.

== Main frameworks ==

Cocoa consists of three Objective-C object libraries called frameworks.
Frameworks are functionally similar to shared libraries, a compiled object that can be dynamically loaded into a program's address space at runtime, but frameworks add associated resources, header files, and documentation. The Cocoa frameworks are implemented as a type of bundle, containing the aforementioned items in standard locations. Foundation Kit (Foundation) first appeared in the Enterprise Objects Framework on NeXTSTEP 3. It was developed as part of the OpenStep work, and subsequently became the basis for OpenStep's AppKit when that system was released in 1994. On macOS, Foundation is based on Core Foundation. Foundation is a generic object-oriented library providing string and value manipulation, containers and iteration, distributed computing, event loops (run loops), and other functions that are not directly tied to the graphical user interface. The "NS" prefix, used for all classes and constants in the framework, comes from Cocoa's OPENSTEP heritage, which was jointly developed by NeXT and Sun Microsystems. Application Kit (AppKit) is directly descended from the original NeXTSTEP Application Kit. It contains code programs can use to create and interact with graphical user interfaces. AppKit is built on top of Foundation, and uses the same NS prefix. Core Data is the object persistence framework included with Foundation and Cocoa and found in Cocoa.h. A key part of the Cocoa architecture is its comprehensive views model. This is organized along conventional lines for an application framework, but is based on the Portable Document Format (PDF) drawing model provided by Quartz. This allows creating custom drawing content using PostScript-like drawing commands, which also allows automatic printer support and so forth. Since the Cocoa framework manages all the clipping, scrolling, scaling and other chores of drawing graphics, the programmer is freed from implementing basic infrastructure and can concentrate on the unique aspects of an application's content.
== Model–view–controller ==

The Smalltalk teams at Xerox PARC eventually settled on a design philosophy that led to easy development and high code reuse. Named model–view–controller (MVC), the concept breaks an application into three sets of interacting object classes: Model classes represent problem domain data and operations (such as lists of people/departments/budgets; documents containing sections/paragraphs/footnotes of stylized text). View classes implement visual representations and affordances for human-computer interaction (such as scrollable grids of captioned icons and pop-up menus of possible operations). Controller classes contain logic that surfaces model data as view representations, maps affordance-initiated user actions to model operations, and maintains state to keep the two synchronized. Cocoa's design is a fairly strict, though not absolute, application of MVC principles. Under OpenStep, most of the classes provided were either high-level View classes (in AppKit) or one of a number of relatively low-level model classes like NSString. Compared to similar MVC systems, OpenStep lacked a strong model layer. No stock class represented a "document," for instance. During the transition to Cocoa, the model layer was expanded greatly, introducing a number of pre-rolled classes to provide functionality common to desktop applications. In Mac OS X 10.3, Apple introduced the NSController family of classes, which provide predefined behavior for the controller layer. These classes are considered part of the Cocoa Bindings system, which also makes extensive use of protocols such as Key-Value Observing and Key-Value Binding. The term 'binding' refers to a relationship between two objects, often between a view and a controller. Bindings allow the developer to focus more on declarative relationships rather than orchestrating fine-grained behavior.
With the arrival of Mac OS X 10.4, Apple extended this foundation further by introducing the Core Data framework, which standardizes change tracking and persistence in the model layer. In effect, the framework greatly simplifies the process of making changes to application data, undoing changes when necessary, saving data to disk, and reading it back in. In providing framework support for all three MVC domains, Apple's goal is to reduce the amount of boilerplate or "glue" code that developers have to write, freeing up resources to spend time on application-specific features. == Late binding == In most object-oriented languages, calls to methods are represented physically by a pointer to the code in memory. This restricts the design of an application since specific command handling classes are needed, usually organized according to the chain-of-responsibility pattern. While Cocoa retains this approach for the most part, Objective-C's late binding opens up more flexibility. Under Objective-C, methods are represented by a selector, a string describing the method to call. When a message is sent, the selector is sent into the Objective-C runtime, matched against a list of available methods, and the method's implementation is called. Since the selector is text data, this lets it be saved to a file, transmitted over a network or between processes, or manipulated in other ways. The implementation of the method is looked up at runtime, not compile time. There is a small performance penalty for this, but late binding allows the same selector to reference different implementations. By a similar token, Cocoa provides a pervasive data manipulation method called key-value coding (KVC). This allows a piece of data or property of an object to be looked up or changed at runtime by name. The property name acts as a key to the value. In traditional languages, this late binding is impossible. KVC leads to great design flexibility. 
An object's type need not be known, yet any property of that object can be discovered using KVC. Also, by extending this system using something Cocoa terms key-value observing (KVO), automatic support for undo-redo is provided. Late static binding is a variant of binding somewhere between static and dynamic binding. The binding of names before the program is run is called static (early); bindings performed as the program runs are dynamic (late or virtual).

== Rich objects ==

One of the most useful features of Cocoa is the powerful base objects the system supplies. As an example, consider the Foundation classes NSString and NSAttributedString, which provide Unicode strings, and the NSText system in AppKit, which allows the programmer to place string objects in the GUI. NSText and its related classes are used to display and edit strings. The collection of objects involved permits an application to implement anything from a simple single-line text entry field to a complete multi-page, multi-column text layout schema, with full professional typography features such as kerning, ligatures, running text around arbitrary shapes, rotation, full Unicode support, and anti-aliased glyph rendering. Paragraph layout can be controlled automatically or by the user, using a built-in "ruler" object that can be attached to any text view. Spell checking is automatic, using a system-wide set of language dictionaries. Unlimited undo/redo support is built in. Using only the built-in features, one can write a text editor application in as few as 10 lines of code. With new controller objects, this may fall towards zero. When extensions are needed, Cocoa's use of Objective-C makes this a straightforward task. Objective-C includes the concept of "categories," which allows modifying an existing class "in place". Functionality can be added in a category without any changes to the original classes in the framework, or even access to its source.
In other common languages, this same task requires deriving a new subclass supporting the added features, and then replacing all instances of the original class with instances of the new subclass.

== Implementations and bindings ==

The Cocoa frameworks are written in Objective-C. Java bindings for the Cocoa frameworks (termed the Java bridge) were also made available with the aim of replacing Objective-C with a more popular language, but these bindings were unpopular among Cocoa developers, and Cocoa's message-passing semantics did not translate well to a statically typed language such as Java. Cocoa's need for runtime binding means many of Cocoa's key features are not available with Java. In 2005, Apple announced that the Java bridge was to be deprecated, meaning that features added to Cocoa in macOS versions later than 10.4 would not be added to the Cocoa-Java programming interface. At Apple Worldwide Developers Conference (WWDC) 2014, Apple introduced a new programming language named Swift, which is intended to replace Objective-C.

=== AppleScriptObjC ===

Originally, AppleScript Studio could be used to develop simpler Cocoa applications. However, as of Snow Leopard, it has been deprecated. It was replaced with AppleScriptObjC, which allows programming in AppleScript, while using Cocoa frameworks.

=== Other bindings ===

The Cocoa programming environment can be accessed using other tools with the aid of bridge mechanisms such as PasCocoa, PyObjC, CamelBones, RubyCocoa, and a D/Objective-C Bridge. Third-party bindings available for other languages include AppleScript, Clozure CL, Monobjc and NObjective (C#), Cocoa# (CLI), Cocodao and D/Objective-C Bridge, LispWorks, Object Pascal, CamelBones (Perl), PyObjC (Python), FPC PasCocoa (Lazarus and Free Pascal), RubyCocoa (Ruby).
A Ruby language implementation named MacRuby, which removes the need for a bridge mechanism, was formerly developed by Apple, while Nu is a Lisp-like language that uses the Objective-C object model directly, and thus can use the Cocoa frameworks without needing a binding.

=== Other implementations ===

There are also open source implementations of major parts of the Cocoa framework, such as GNUstep and Cocotron, which allow cross-platform Cocoa application development to target other operating systems, such as Microsoft Windows and Linux.

== See also ==

== References ==

== Bibliography ==

Aaron Hillegass: Cocoa Programming for Mac OS X, Addison-Wesley, 3rd Edition 2008, Paperback, ISBN 0-321-50361-9.
Stephen Kochan: Programming in Objective-C, Sams, 1st Edition 2003, Paperback, ISBN 0-672-32586-1.
Michael Beam, James Duncan Davidson: Cocoa in a Nutshell, O'Reilly, 1st Edition 2003, Paperback, ISBN 0-596-00462-1.
Erick Tejkowski: Cocoa Programming for Dummies, 1st Edition 2003, Paperback, ISBN 0-7645-2613-8.
Simson Garfinkel, Michael K. Mahoney: Building Cocoa Applications: A Step by Step Guide, O'Reilly Media, 1st Edition 2002, Paperback, ISBN 0-596-00235-1. CiteSeerX 10.1.1.394.3248.
Paris Buttfield-Addison, Jon Manning: Learning Cocoa with Objective-C, O'Reilly, 3rd Edition 2012, Paperback, ISBN 978-1-4493-1849-9.
Scott Anguish, Erik M. Buck, Donald A. Yacktman: Cocoa Programming, Sams, 1st Edition 2002, Paperback, ISBN 0-672-32230-7.
Erik M. Buck, Donald A. Yacktman: Cocoa Design Patterns, Addison-Wesley Professional, 1st Edition 2009, Paperback, ISBN 978-0321535023.
Bill Cheeseman: Cocoa Recipes for Mac OS X, Peachpit Press, 1st Edition 2002, Paperback, ISBN 0-201-87801-1.
Andrew Duncan: Objective-C Pocket Reference, O'Reilly, 1st Edition 2002, Paperback, ISBN 0-596-00423-0.

== External links ==

Mac Developer Library, Cocoa Layer, Apple's documentation
iDevApps, Mac programming forum
Cocoa Dev Central
Cocoa Dev
Stack Overflow: Cocoa
https://en.wikipedia.org/wiki/Cocoa_(API)
In mathematics and in computer programming, a variadic function is a function of indefinite arity, i.e., one which accepts a variable number of arguments. Support for variadic functions differs widely among programming languages. The term variadic is a neologism, dating back to 1936/1937. The term was not widely used until the 1970s.

== Overview ==

There are many mathematical and logical operations that arise naturally as variadic functions. For instance, the summing of numbers or the concatenation of strings or other sequences are operations that can be thought of as applicable to any number of operands (even though formally in these cases the associative property is applied). Another operation that has been implemented as a variadic function in many languages is output formatting. The C function printf and the Common Lisp function format are two such examples. Both take one argument that specifies the formatting of the output, and any number of arguments that provide the values to be formatted. Variadic functions can expose type-safety problems in some languages. For instance, C's printf, if used incautiously, can give rise to a class of security holes known as format string attacks. The attack is possible because the language support for variadic functions is not type-safe: it permits the function to attempt to pop more arguments off the stack than were placed there, corrupting the stack and leading to unexpected behavior. As a consequence of this, the CERT Coordination Center considers variadic functions in C to be a high-severity security risk. In functional programming languages, variadics can be considered complementary to the apply function, which takes a function and a list/sequence/array as arguments, and calls the function with the arguments supplied in that list, thus passing a variable number of arguments to the function.
In the functional language Haskell, variadic functions can be implemented by returning a value of a type class T; if instances of T are a final return value r and a function (T t) => x -> t, this allows for any number of additional arguments x. A related subject in term rewriting research is called hedges, or hedge variables. Unlike variadics, which are functions with arguments, hedges are sequences of arguments themselves. They also can have constraints ('take no more than 4 arguments', for example) to the point where they are not variable-length (such as 'take exactly 4 arguments') - thus calling them variadics can be misleading. However they are referring to the same phenomenon, and sometimes the phrasing is mixed, resulting in names such as variadic variable (synonymous to hedge). Note the double meaning of the word variable and the difference between arguments and variables in functional programming and term rewriting. For example, a term (function) can have three variables, one of them a hedge, thus allowing the term to take three or more arguments (or two or more if the hedge is allowed to be empty). == Examples == === In C === To portably implement variadic functions in the C language, the standard stdarg.h header file is used. The older varargs.h header has been deprecated in favor of stdarg.h. In C++, the header file cstdarg is used. This will compute the average of an arbitrary number of arguments. Note that the function does not know the number of arguments or their types. The above function expects that the types will be int, and that the number of arguments is passed in the first argument (this is a frequent usage but by no means enforced by the language or compiler). In some other cases, for example printf, the number and types of arguments are figured out from a format string. In both cases, this depends on the programmer to supply the correct information. 
(Alternatively, a sentinel value like NULL or nullptr may be used to indicate the end of the parameter list.) If fewer arguments are passed in than the function believes, or the types of arguments are incorrect, this could cause it to read into invalid areas of memory and can lead to vulnerabilities like the format string attack. Depending on the system, even using NULL as a sentinel may encounter such problems; nullptr or a dedicated null pointer of the correct target type may be used to avoid them. stdarg.h declares a type, va_list, and defines four macros: va_start, va_arg, va_copy, and va_end. Each invocation of va_start and va_copy must be matched by a corresponding invocation of va_end. When working with variable arguments, a function normally declares a variable of type va_list (ap in the example) that will be manipulated by the macros. va_start takes two arguments, a va_list object and a reference to the function's last parameter (the one before the ellipsis; the macro uses this to get its bearings). In C23, the second argument is no longer required and variadic functions no longer need a named parameter before the ellipsis. It initialises the va_list object for use by va_arg or va_copy. The compiler will normally issue a warning if the reference is incorrect (e.g. a reference to a different parameter than the last one, or a reference to a wholly different object), but will not prevent compilation from completing normally. va_arg takes two arguments, a va_list object (previously initialised) and a type descriptor. It expands to the next variable argument, and has the specified type. The behavior is undefined if the type is incorrect or there is no next variable argument. Successive invocations of va_arg allow processing each of the variable arguments in turn. va_end takes one argument, a va_list object. It serves to clean up.
If one wanted to, for instance, scan the variable arguments more than once, the programmer would re-initialise the va_list object by invoking va_end and then va_start again on it. va_copy takes two arguments, both of them va_list objects. It clones the second (which must have been initialised) into the first. Going back to the "scan the variable arguments more than once" example, this could be achieved by invoking va_start on a first va_list, then using va_copy to clone it into a second va_list. After scanning the variable arguments a first time with va_arg and the first va_list (disposing of it with va_end), the programmer could scan the variable arguments a second time with va_arg and the second va_list. va_end needs to also be called on the cloned va_list before the containing function returns.

=== In C# ===

C# describes variadic functions using the params keyword. A type must be provided for the arguments, although object[] can be used as a catch-all. At the calling site, the arguments can either be listed one by one, or a pre-existing array with the required element type can be handed over. Using the variadic form is syntactic sugar for the latter.

=== In C++ ===

The basic variadic facility in C++ is largely identical to that in C. The only difference is in the syntax, where the comma before the ellipsis can be omitted. C++ allows variadic functions without named parameters but provides no way to access those arguments since va_start requires the name of the last fixed argument of the function. Variadic templates (parameter pack) can also be used in C++ with language built-in fold expressions. The CERT Coding Standards for C++ strongly prefer the use of variadic templates (parameter pack) over the C-style variadic function due to a lower risk of misuse.
=== In Fortran ===

Since the Fortran 90 revision, Fortran functions or subroutines can accept optional arguments: the argument list is still fixed, but the ones that have the optional attribute can be omitted in the function/subroutine call. The intrinsic function present() can be used to detect the presence of an optional argument. The optional arguments can appear anywhere in the argument list.

Output:
The sum of [1 2] is 3
The sum of [1 2 3] is 6
The sum of [1 2 3 4] is 10

=== In Go ===

Variadic functions in Go can be called with any number of trailing arguments. fmt.Println is a common variadic function; it uses an empty interface as a catch-all type.

Output:
The sum of [1 2] is 3
The sum of [1 2 3] is 6
The sum of [1 2 3 4] is 10

=== In Java ===

As with C#, the Object type in Java is available as a catch-all.

=== In JavaScript ===

JavaScript does not care about types of variadic arguments. It is also possible to create a variadic function using the arguments object, although it is only usable with functions created with the function keyword.

=== In Lua ===

Lua functions may pass varargs to other functions the same way as other values using the return keyword. Tables can be passed into variadic functions by using table.unpack (in Lua version 5.2 or higher) or unpack (in Lua 5.1 or lower). Varargs can be used as a table by constructing a table with the vararg as a value.

=== In Pascal ===

Pascal is standardized by ISO standards 7185 ("Standard Pascal") and 10206 ("Extended Pascal"). Neither standardized form of Pascal supports variadic routines, except for certain built-in routines (read/readLn and write/writeLn, and additionally in EP readStr/writeStr). Nonetheless, dialects of Pascal implement mechanisms resembling variadic routines. Delphi defines an array of const data type that may be associated with the last formal parameter. Within the routine definition the array of const is an array of TVarRec, an array of variant records.
The VType member of the aforementioned record data type allows inspection of the argument’s data type and subsequent appropriate handling. The Free Pascal Compiler supports Delphi’s variadic routines, too. This implementation, however, technically requires a single argument that is an array. Pascal imposes the restriction that arrays must be homogeneous, a requirement that is circumvented here by utilizing a variant record. GNU Pascal defines a true variadic formal-parameter specification using an ellipsis (...), but as of 2022 no portable mechanism for using such parameters has been defined. Both GNU Pascal and FreePascal allow externally declared functions to use a variadic formal-parameter specification using an ellipsis (...). === In PHP === PHP does not check the types of variadic arguments unless the parameter is typed; when a type is declared for a variadic parameter, every argument must match it. === In Python === Python does not check the types of variadic arguments. Extra positional arguments are collected into a tuple, and keyword arguments into a dictionary, as in def bar(*args, **kwargs). === In Raku === In Raku, the parameters that create variadic functions are known as slurpy array parameters, and they are classified into three groups: ==== Flattened slurpy ==== These parameters are declared with a single asterisk (*), and they flatten arguments by dissolving one or more layers of elements that can be iterated over (i.e., Iterables). ==== Unflattened slurpy ==== These parameters are declared with two asterisks (**), and they do not flatten any iterable arguments within the list, but keep the arguments more or less as-is. ==== Contextual slurpy ==== These parameters are declared with a plus (+) sign, and they apply the "single argument rule", which decides how to handle the slurpy argument based upon context. Simply put, if only a single argument is passed and that argument is iterable, that argument is used to fill the slurpy parameter array. In any other case, +@ works like **@ (i.e., unflattened slurpy).
=== In Ruby === Ruby does not check the types of variadic arguments. === In Rust === Rust does not support variadic arguments in functions; instead, it uses macros. Rust is able to interact with C's variadic system via the c_variadic feature switch. As with other C interfaces, the mechanism is considered unsafe in Rust. === In Scala === === In Swift === Swift checks the type of variadic arguments, but the catch-all Any type is available. === In Tcl === A Tcl procedure or lambda is variadic when its last argument is args: this will contain a list (possibly empty) of all the remaining arguments. This pattern is common in many other procedure-like methods. == See also == Varargs in Java programming language Variadic macro (C programming language) Variadic template == Notes == == References == == External links == Variadic function. Rosetta Code task showing the implementation of variadic functions in over 120 programming languages. Variable Argument Functions — A tutorial on Variable Argument Functions for C++ GNU libc manual
https://en.wikipedia.org/wiki/Variadic_function
The University of Pennsylvania (Penn or UPenn) is a private Ivy League research university in Philadelphia, Pennsylvania, United States. It is one of nine colonial colleges and was chartered prior to the U.S. Declaration of Independence when Benjamin Franklin, the university's founder and first president, advocated for an educational institution that trained leaders in academia, commerce, and public service. The university has four undergraduate schools and 12 graduate and professional schools. Schools enrolling undergraduates include the College of Arts and Sciences, the School of Engineering and Applied Science, the Wharton School, and the School of Nursing. Among its graduate schools are its law school, whose first professor James Wilson participated in writing the first draft of the U.S. Constitution, and its medical school, which was the first medical school established in North America. In 2023, Penn ranked third among U.S. universities in research expenditures, according to the National Science Foundation. Its endowment is $22.3 billion, making it the sixth-wealthiest private academic institution in the nation as of 2024. The University of Pennsylvania's main campus is located in the University City neighborhood of West Philadelphia, and is centered around College Hall. Notable campus landmarks include Houston Hall, the first modern student union, and Franklin Field, the nation's first dual-level college football stadium and the nation's longest-standing NCAA Division I college football stadium in continuous operation. The university's athletics program, the Penn Quakers, fields varsity teams in 33 sports as a member of NCAA Division I's Ivy League conference. Penn alumni, trustees, and faculty include eight Founding Fathers of the United States who signed the Declaration of Independence, seven who signed the U.S. 
Constitution, 24 members of the Continental Congress, three Presidents of the United States (the 9th, 45th, 46th, and 47th Presidents), 38 Nobel laureates, nine foreign heads of state, three United States Supreme Court justices, at least four Supreme Court justices of foreign nations, 32 U.S. senators, 163 members of the U.S. House of Representatives, 19 U.S. Cabinet Secretaries, 46 governors, 28 State Supreme Court justices, 36 living undergraduate billionaires (the largest number of any U.S. college or university), and five Medal of Honor recipients. == History == In 1740, a group of Philadelphians organized to erect a great preaching hall for George Whitefield, a traveling Anglican evangelist; designed and constructed by Edmund Woolley, it was the largest building in Philadelphia at the time, and thousands attended to hear Whitefield preach. In the fall of 1749, Benjamin Franklin, a Founding Father and polymath in Philadelphia, circulated a pamphlet, "Proposals Relating to the Education of Youth in Pensilvania," his vision for what he called a "Public Academy of Philadelphia". On June 16, 1755, the College of Philadelphia was chartered, paving the way for the addition of undergraduate instruction. Penn identifies as the fourth-oldest institution of higher education in the United States, though this claim is challenged by Princeton and Columbia, since the College of Philadelphia was not chartered and did not commence classes until 1755, and the first board of trustees was not convened until 1749, arguably making Penn the fifth- or sixth-oldest. == Campus == The University of Pennsylvania's campus spans approximately 299 acres in West Philadelphia, featuring a blend of historic and modern architecture. Key facilities include the Perelman Center for Advanced Medicine, the Penn Museum, and the recently constructed Pennovation Center, which serves as a hub for innovation and entrepreneurship.
Much of the current architecture on Penn's campus was designed by the Philadelphia-based architecture firm Cope and Stewardson, whose principals were Philadelphia-born architects and Penn professors who also designed much of Princeton University and a large part of Washington University in St. Louis. They were known for combining the Gothic architecture of the University of Oxford and the University of Cambridge with the local landscape to establish the Collegiate Gothic style. Penn's main artery at the center of Penn's Campus Historic District is Locust Walk, a pedestrian-only walkway first announced by Penn President Harold Stassen in 1948; work began in the summer of 1960 and was completed in 1972. The present core campus covers over 299 acres (121 ha) in a contiguous area of West Philadelphia's University City section, and the older heart of the campus comprises the University of Pennsylvania Campus Historic District. All of Penn's schools and most of its research institutes are located on this campus. The surrounding neighborhood includes several restaurants, bars, a large upscale grocery store, and a movie theater on the western edge of campus. Penn's core campus borders Drexel University and is a few blocks from the University City campus of Saint Joseph's University, which absorbed the University of the Sciences in Philadelphia in a merger, and The Restaurant School at Walnut Hill College. The Wistar Institute, a cancer research center, is also located on Penn's campus. In 2014, a new seven-story glass and steel building was completed next to the institute's original brick edifice, built in 1897, further expanding collaboration between the university and the Wistar Institute. The Module 6 Utility Plant and Garage at Penn was designed by BLT Architects and completed in 1995.
Module 6 is located at 38th and Walnut Streets and includes spaces for 627 vehicles, 9,000 sq ft (840 m2) of storefront retail operations, a 9,500-ton chiller module and corresponding extension of the campus chilled water loop, and a 4,000-ton ice storage facility. In 2010, in its first significant expansion across the Schuylkill River, Penn purchased 23 acres (9.3 ha) at the northwest corner of 34th Street and Grays Ferry Avenue, the then site of DuPont's Marshall Research Labs. In October 2016, with help from architects Matthias Hollwich, Marc Kushner, and KSS Architects, Penn completed the design and renovation of the centerpiece of the project, a former paint factory named Pennovation Works, which houses shared desks, wet labs, common areas, a pitch bleacher, and other attributes of a tech incubator. The rest of the site, known as South Bank, is a mixture of lightly refurbished industrial buildings that serve as affordable and flexible workspaces and land for future development. Penn hopes that "South Bank will provide a place for academics, researchers, and entrepreneurs to establish their businesses in close proximity to each other to facilitate cross-pollination of their ideas, creativity, and innovation," according to a March 2017 university statement. === Parks and arboreta === In 2007, Penn acquired about 35 acres (14 ha) between the campus and the Schuylkill River at the former site of the Philadelphia Civic Center and a nearby 24-acre (9.7 ha) site then owned by the United States Postal Service. Dubbed the Postal Lands, the site extends from Market Street on the north to Penn's Bower Field on the south, including the former main regional U.S. Postal Building at 30th and Market Streets, now the regional office for the U.S. Internal Revenue Service. Over the next decade, the site became home to educational, research, biomedical, and mixed-use facilities. The first phase, comprising a park and athletic facilities, opened in the fall of 2011.
In September 2011, Penn completed construction of the $46.5 million, 24-acre (9.7 ha) Penn Park, which features passive and active recreation and athletic components framed and subdivided by canopy trees, lawns, and meadows. It is located east of the Highline Green and stretches from Walnut Street to South Street. Penn maintains two arboreta. The first, the roughly 300-acre (120 ha) Penn Campus Arboretum, encompasses the entire University City main campus. The campus arboretum is an urban forest with over 6,500 trees representing 240 species of trees and shrubs, ten specialty gardens, and five urban parks; it has been designated a Tree Campus USA since 2009 and formally recognized as an accredited ArbNet arboretum since 2017. Penn maintains an interactive website linked to its comprehensive tree inventory, which allows users to explore the entire collection of trees. The second, the 92-acre Morris Arboretum, is the official arboretum of the Commonwealth of Pennsylvania and includes more than 13,000 labelled plants of 2,500 types, representing the temperate floras of North America, Asia, and Europe, with a primary focus on Asia. === New Bolton Center === Penn also owns the 687-acre (278 ha) New Bolton Center, the research and large-animal health care center of its veterinary school. Located near Kennett Square, New Bolton Center received nationwide media attention when Kentucky Derby winner Barbaro underwent surgery at its Widener Hospital for injuries suffered while running in the Preakness Stakes. === Libraries === Penn's library system has grown to include 300 full-time equivalent (FTE) employees and a total operating budget of more than $95 million. The library system holds 6.19 million book and serial volumes as well as 4.23 million microform items and 1.11 million e-books, and subscribes to over 68,000 print serials and e-journals. The university has 19 libraries.
Van Pelt Library on the Penn campus is the university's main library. The other 18 include:
Annenberg School for Communication Library, located on Walnut Street between 36th and 37th Streets
Archaeology and Anthropology Library, located at the Penn Museum of Archaeology and Anthropology
Biddle Law Library, located on campus on the 3500 block of Sansom Street at the School of Law
Chemistry Library, located on campus on the 3300 block of Spruce Street in the Chemistry Building
Dental Medicine Library, located on campus on the 4000 block of Locust Street at the Dental School
Fisher Fine Arts Library, located on campus on the 3400 block of Woodland Avenue
Holman Biotech Commons, located on campus on the 3500 block of Hamilton Walk adjacent to the Robert Wood Johnson Pavilion at the Medical School and the Nursing School
Humanities and Social Sciences Library, including Weigle Information Commons, located on campus between 34th and 35th Streets on Locust Street in the Van Pelt Library
Katz Center for Advanced Judaic Studies Library, located off campus at 420 Walnut Street near Independence Hall and Washington Square
Lea Library, a collection of Catholic Church history, located on campus between 34th and 35th Streets on Locust Street on the 6th floor of the Van Pelt Library
Lippincott Business Library, located on campus between 35th and 36th Streets on Locust Street on the second floor of the Van Pelt Library
Math/Physics/Astronomy Library, located on campus on the 3200 block of Walnut Street adjacent to The Palestra, on the third floor of the David Rittenhouse Laboratory
Rare Books and Manuscripts Library and Yarnall Library of Theology, located on campus between 34th and 35th Streets on Locust Street in the Van Pelt Library
Veterinary Medicine Library, located on campus between 38th and 39th Streets on Sansom Street at the Veterinary Medicine School, with a satellite library located off campus at New Bolton Center
Penn also maintains books and records off campus at a high-density storage facility. The Penn Design School's Fine Arts Library was built to be Penn's main library and was the first with its own building. The main library at the time was designed by Frank Furness and was the first library in the nation to separate the low-ceilinged library stacks, where the books were stored, from reading and study rooms with ceilings more than forty feet high. The Yarnall Library of Theology, a major American rare book collection, is part of Penn's libraries. The Yarnall Library of Theology was formerly affiliated with St. Clement's Church in Philadelphia. It was founded in 1911 under the terms of the wills of Ellis Hornor Yarnall (1839–1907) and Emily Yarnall, and was subsequently housed at the former Philadelphia Divinity School. The library's major areas of focus are theology, patristics, and the liturgy, history, and theology of the Anglican Communion and the Episcopal Church in the United States of America. It includes a large number of rare books, incunabula, and illuminated manuscripts, and new material continues to be added. === Art installations === The campus has more than 40 notable art installations, in part because of a 1959 Philadelphia ordinance requiring the total budget for new construction or major renovation projects in which governmental resources are used to include 1% for art, to be used to pay for the installation of site-specific public art; in part because many alumni collected and donated art to Penn; and in part because of the presence of the University of Pennsylvania School of Design on the campus. Alexander Archipenko's sculpture of King Solomon was initially loaned to Penn in 1985 by the parents of a Penn student and was donated in 1995 to honor the inauguration of Judith Rodin as Penn president in 1994.
In 2020, Penn installed Brick House, a monumental work of art created by Simone Leigh, at the College Green gateway to Penn's campus near the corner of 34th Street and Woodland Walk. This 5,900-pound (2,700 kg) bronze sculpture, which is 16 feet (4.9 m) high and 9 feet (2.7 m) in diameter at its base, depicts an African woman's head crowned with an afro framed by cornrow braids atop a form that resembles both a skirt and a clay house. At the installation, Penn president Amy Gutmann proclaimed that "Ms. Leigh's sculpture brings a striking presence of strength, grace, and beauty—along with an ineffable sense of mystery and resilience—to a central crossroad of Penn's campus." The Covenant, known to the student body as "Dueling Tampons" or "The Tampons," is a large red structure created by Alexander Liberman and located on Locust Walk as a gateway to the high-rise residences "super block." It was installed in 1975 and is made of rolled sheets of milled steel. A white button, known as The Button and officially called the Split Button, is a modern art sculpture designed by Swedish sculptor Claes Oldenburg, who specialized in creating oversize sculptures of everyday objects. It sits at the south entrance of Van Pelt Library and has button holes large enough for people to stand inside. Penn also has a replica of the Love sculpture, part of a series created by Robert Indiana. It is a painted aluminum sculpture and was installed in 1998 overlooking College Green. In 2019, the Association for Public Art loaned Penn two multi-ton sculptures: Social Consciousness, created by Sir Jacob Epstein in 1954, and Atmosphere and Environment XII, created by Louise Nevelson in 1970. Until the loan, both works had been located at the West Entrance to the Philadelphia Museum of Art, the older since its creation and the Nevelson work since 1973.
Social Consciousness was relocated to the walkway between Wharton's Lippincott Library and the Phi Phi chapter of the Alpha Chi Rho fraternity house, and Atmosphere and Environment XII is sited on Shoemaker Green between Franklin Field and the Ringe Squash Courts. In addition to the contemporary art, Penn also has several traditional statues, including a number created by R. Tait McKenzie, the first director of Penn's Department of Physical Education. Among the notable sculptures is that of a young Ben Franklin, which McKenzie produced and which Penn sited adjacent to the fieldhouse next to Franklin Field. The sculpture is titled Benjamin Franklin in 1723 and was created by McKenzie during the pre-World War I era (1910–1914). Other sculptures he produced for Penn include the 1924 sculpture of then Penn provost Edgar Fahs Smith. Penn is presently reevaluating all of its public art and has formed a working group led by Penn Design dean Frederick Steiner, who was part of a similar effort at the University of Texas at Austin that led to the removal of statues of Jefferson Davis and other Confederate officials, and by Penn's Chief Diversity Officer, Joann Mitchell. Penn has begun the process of adding art and removing or relocating art. In 2020, Penn removed from campus the statue of the Reverend George Whitefield (who had inspired the 1740 establishment of a trust to found a charity school, a trust that Penn legally assumed in 1749) after research showed that Whitefield owned fifty enslaved people and drafted and advocated for the key theological arguments in favor of slavery in Georgia and the rest of the Thirteen Colonies. === Penn Museum === Since the founding of the Penn Museum in 1887, it has taken part in 400 research projects worldwide. The museum's first project was an excavation of Nippur, a location in present-day Iraq.
The Penn Museum is home to the largest authentic sphinx in North America, which is about seven feet high, four feet wide, and 13 feet long, weighs 12.9 tons, and is made of solid red granite. The sphinx was discovered in 1912 by the British archaeologist Sir William Matthew Flinders Petrie during an excavation of the ancient Egyptian city of Memphis, where the sphinx had guarded a temple to ward off evil. Since Petrie's expedition was partially financed by Penn, he offered the sphinx to the university, which arranged for it to be moved to the museum in 1913. The sphinx was moved in 2019 to a more prominent spot intended to attract visitors. The museum has three gallery floors with artifacts from Egypt, the Middle East, Mesoamerica, Asia, the Mediterranean, and Africa, along with indigenous artifacts of the Americas. Its most famous object is the goat rearing into the branches of a rosette-leafed plant, from the royal tombs of Ur. The Penn Museum's excavations and collections foster a strong research base for graduate students in the Graduate Group in the Art and Archaeology of the Mediterranean World. Features of the Beaux-Arts building include a rotunda and gardens that include Egyptian papyrus. === Other Penn museums and galleries === Penn maintains a website providing a detailed roadmap to small museums and galleries and to over one hundred locations across campus where the public can access Penn's more than 8,000 artworks acquired over 250 years, including paintings, sculptures, photography, works on paper, and decorative arts. The largest of the art galleries is the Institute of Contemporary Art, one of the only kunsthalles in the country, which showcases various art exhibitions throughout the year. Since 1983, the Arthur Ross Gallery, located at the Fisher Fine Arts Library, has housed Penn's art collection; it is named for its benefactor, philanthropist Arthur Ross.
=== Residences === Every College House at the University of Pennsylvania has at least four faculty members serving in the roles of House Dean, Faculty Master, and College House Fellows. Within the College Houses, Penn has nearly 40 themed residential programs for students with shared interests such as world cinema or science and technology. Many of the nearby homes and apartments in the area surrounding the campus are often rented by undergraduate students moving off campus after their first year, as well as by graduate and professional students. The College Houses include W.E.B. Du Bois, Fisher Hassenfeld, Gregory, Gutmann, Harnwell, Harrison, Hill College House, Kings Court English, Lauder, Riepe, Rodin, Stouffer, and Ware. The first College House was Van Pelt College House, established in the fall of 1971 and later renamed Gregory House. Fisher Hassenfeld, Ware, and Riepe together make up one building called "The Quad." The latest College House to be built is Gutmann (formerly named New College House West), which opened in the fall of 2021. Penn students in their junior or senior year may live in the 45 sororities and fraternities governed by three student-run governing councils: the Interfraternity Council, the Intercultural Greek Council, and the Panhellenic Council. == Organization == The College of Arts and Sciences is the undergraduate division of the School of Arts and Sciences. The School of Arts and Sciences also contains the Graduate Division and the College of Liberal and Professional Studies, which is home to the Fels Institute of Government, the master's program in Organizational Dynamics, and the Master of Environmental Studies (MES) program. The Wharton School is the business school of the University of Pennsylvania. Other schools with undergraduate programs include the School of Nursing and the School of Engineering and Applied Science (SEAS). The current president is J. Larry Jameson (interim).
=== Campus police === The University of Pennsylvania Police Department (UPPD) is the largest private police department in Pennsylvania, with 117 members. All officers are sworn municipal police officers and retain general law enforcement authority while on the campus. === Seal === The official seal of the Trustees of the University of Pennsylvania serves as the signature and symbol of authenticity on documents issued by the corporation. The most recent design, a modified version of the original seal, was approved in 1932, adopted a year later, and is still used for many of the same purposes as the original. A request for a seal was first recorded in a meeting of the trustees in 1753, during which some of the Trustees "desired to get a Common Seal engraved for the Use of [the] Corporation." In 1756, a public seal and motto for the college was engraved in silver. The outer ring of the current seal is inscribed with "Universitas Pennsylvaniensis," the Latin name of the University of Pennsylvania. The inside contains seven stacked books on a desk with the titles of subjects of the trivium and a modified quadrivium, components of a classical education: Theolog[ia], Astronom[ia], Philosoph[ia], Mathemat[ica], Logica, Rhetorica and Grammatica. Between the books and the outer ring is the Latin motto of the university, "Leges Sine Moribus Vanae." == Academics == Penn's "One University" policy allows students to enroll in classes at any of Penn's twelve schools; undergraduates have access to courses at all of Penn's undergraduate and graduate schools except the medical, veterinary, and dental schools. Penn has a strong focus on interdisciplinary learning and research, offering double degree programs, unique majors, and academic flexibility.
Undergraduates at Penn may also take courses at Bryn Mawr, Haverford, and Swarthmore under a reciprocal agreement known as the Quaker Consortium. === Admissions === Undergraduate admission to the University of Pennsylvania is considered by U.S. News to be "most selective." Admissions officials consider a student's GPA to be a very important academic factor, with emphasis on an applicant's high school class rank and letters of recommendation. Admission is need-blind for U.S., Canadian, and Mexican applicants. For the class of 2026, entering in Fall 2022, the university received 54,588 applications. The Atlantic has also ranked Penn among the 10 most selective schools in the country. At the graduate level, based on admission statistics from U.S. News & World Report, Penn's most selective programs include its law school, the health care schools (medicine, dental medicine, nursing, veterinary), the School of Engineering and Applied Sciences, and the Wharton School. === Coordinated dual-degree, accelerated, interdisciplinary programs === Penn offers specialized coordinated dual-degree (CDD) programs, which selectively award candidates degrees from multiple schools at the university upon completion of the graduation criteria of both schools, along with program-specific requirements and senior capstone projects. Additionally, there are accelerated and interdisciplinary programs offered by the university.
These undergraduate programs include:
Huntsman Program in International Studies and Business
Jerome Fisher Program in Management and Technology (M&T)
Roy and Diana Vagelos Program in Life Sciences and Management (LSM)
Nursing and Health Care Management (NHCM)
Roy and Diana Vagelos Integrated Program in Energy Research (VIPER)
Vagelos Scholars Program in Molecular Life Sciences (MLS)
Singh Program in Networked and Social Systems Engineering (NETS)
Digital Media Design (DMD)
Computer and Cognitive Science: Artificial Intelligence
Accelerated 7-Year Bio-Dental Program
Accelerated 6-Year Law and Medicine Program
Dual-degree programs that lead to the same multiple degrees without participation in the specific programs above are also available. Unlike CDD programs, "dual degree" students fulfill the requirements of both programs independently, without the involvement of another program. Specialized dual-degree programs include Liberal Studies and Technology as well as an Artificial Intelligence: Computer and Cognitive Science program. Both programs award a degree from the College of Arts and Sciences and a degree from the School of Engineering and Applied Sciences. Also, the Vagelos Scholars Program in Molecular Life Sciences allows its students either to double major in the sciences or to submatriculate and earn both a BA and an MS in four years. The most recent of these, the Vagelos Integrated Program in Energy Research (VIPER), was first offered for the class of 2016. A joint program of Penn's School of Arts and Sciences and the School of Engineering and Applied Science, VIPER leads to dual Bachelor of Arts and Bachelor of Science in Engineering degrees by combining majors from each school. For graduate programs, Penn offers many formalized double degree graduate degrees, such as a joint J.D./MBA, and maintains a list of interdisciplinary institutions, such as the Institute for Medicine and Engineering, the Joseph H.
Lauder Institute for Management and International Studies, and the Institute for Research in Cognitive Science. The School of Social Policy and Practice, commonly known as Penn SP2, is a school of social policy and social work that offers degrees in a variety of subfields, in addition to several dual degree programs and sub-matriculation programs. Penn SP2's vision is "The passionate pursuit of social innovation, impact and justice." Originally named the School of Social Work, SP2 was founded in 1908 and is a graduate school of the University of Pennsylvania. The school specializes in research, education, and policy development in relation to both social and economic issues. The School of Veterinary Medicine offers five dual-degree programs, combining the Doctor of Veterinary Medicine (VMD) with a Master of Social Work (MSW), Master of Environmental Studies (MES), Doctor of Philosophy (PhD), Master of Public Health (MPH), or Master of Business Administration (MBA) degree. The Penn Vet dual-degree programs are meant to support veterinarians planning to engage in interdisciplinary work in the areas of human health, environmental health, and animal health and welfare. === Academic medical center and biomedical research complex === In 2018, the university's nursing school was ranked number one by Quacquarelli Symonds, which that year also ranked Penn's School of Veterinary Medicine sixth. In 2019, the Perelman School of Medicine was named the third-best medical school for research in U.S. News & World Report's 2020 ranking. The University of Pennsylvania Health System, also known as UPHS, is a multi-hospital health system headquartered in Philadelphia, Pennsylvania, owned by the Trustees of the University of Pennsylvania. UPHS and the Perelman School of Medicine at the University of Pennsylvania together constitute Penn Medicine, a clinical and research entity of the University of Pennsylvania.
UPHS hospitals include the Hospital of the University of Pennsylvania, Penn Presbyterian Medical Center, Pennsylvania Hospital, Chester County Hospital, Lancaster General Hospital, and Princeton Medical Center. Penn Medicine owns and operates the first hospital in the United States, Pennsylvania Hospital, which is also home to America's first surgical amphitheatre and its first medical library. === International partnerships === Students can study abroad for a semester or a year at partner institutions, which include Singapore Management University, the London School of Economics, the University of Edinburgh, the Chinese University of Hong Kong, the University of Melbourne, Sciences Po, the University of Queensland, University College London, King's College London, the Hebrew University of Jerusalem, and ETH Zurich. === Reputation and rankings === U.S. News & World Report's 2024 rankings place Penn 6th of 394 national universities in the United States. The Princeton Review's 2023 student survey ranked Penn 7th on its Dream Colleges list. Penn was ranked 4th of 444 universities in the United States by College Factual for 2024. In 2023, Penn was ranked as having the 7th-happiest students in the United States (the highest in the Ivy League). The Wall Street Journal reported in 2024 that Penn's undergraduate alumni earned the 5th-highest salaries (taking into account the cost of education and other factors), second in the Ivy League behind Princeton. Among its professional schools, the school of education was ranked number one in 2021, the Wharton School was ranked number one in 2022 and 2024, and the communication, dentistry, medicine, nursing, law, and veterinary schools rank in the top 5 nationally. Penn's law school was ranked number 4 in 2023, and Penn's School of Design and Architecture and its School of Social Policy and Practice are ranked in the top 10. == Research == Penn is classified as an "R1" doctoral university: "Highest research activity."
Its economic impact on the Commonwealth of Pennsylvania for 2015 amounted to $14.3 billion. Penn had research expenditures totaling over $1.9 billion in 2023, ranking third among U.S. universities in research and development spending, according to the National Science Foundation. In fiscal year 2019, Penn received $582.3 million in funding from the National Institutes of Health. Penn's research centers often span two or more disciplines. In the 2010–2011 academic year, five interdisciplinary research centers were created or substantially expanded; these include the Center for Health-care Financing, the Center for Global Women's Health at the Nursing School, the Morris Arboretum's Horticulture Center, the Jay H. Baker Retailing Center at Wharton and the Translational Research Center at Penn Medicine. With these additions, Penn now counts 165 research centers hosting a research community of over 4,300 faculty, over 1,100 postdoctoral fellows, and 5,500 academic support staff and graduate student trainees. To further assist the advancement of interdisciplinary research, President Amy Gutmann established the "Penn Integrates Knowledge" title awarded to selected Penn professors "whose research and teaching exemplify the integration of knowledge." These professors hold endowed professorships and joint appointments between Penn's schools. Penn is also among the most prolific producers of doctoral students. With 487 PhDs awarded in 2009, Penn ranks third in the Ivy League behind Columbia and Cornell; Harvard did not report data. It also has one of the highest numbers of post-doctoral appointees (933 in number for 2004–2007), ranking third in the Ivy League (behind Harvard and Yale) and tenth nationally.
In most disciplines Penn professors' productivity is among the highest in the nation and first in the fields of epidemiology, business, communication studies, comparative literature, languages, information science, criminal justice and criminology, social sciences and sociology. According to the National Research Council nearly three-quarters of Penn's 41 assessed programs were placed in ranges including the top 10 rankings in their fields, with more than half of these in ranges including the top five rankings in these fields. Penn's research tradition has historically been complemented by innovations that shaped higher education. In addition to establishing the first medical school, the first university teaching hospital, the oldest continuously operating degree-granting program in chemical engineering, the first business school, and the first student union, Penn was also the cradle of other significant developments. In 1852, Penn Law was the first law school in the nation to publish a law journal still in existence (then called The American Law Register, now the Penn Law Review, one of the most cited law journals in the world). Under the deanship of William Draper Lewis, the law school was also one of the first schools to emphasize legal teaching by full-time professors instead of practitioners, a system that is still followed today. The Wharton School was home to several pioneering developments in business education. It established the first research center in a business school in 1921 and the first center for entrepreneurship in 1973 and it regularly introduced novel curricula for which BusinessWeek wrote, "Wharton is on the crest of a wave of reinvention and change in management education." The university has also contributed major advancements in the fields of economics and management. 
Among the many discoveries are conjoint analysis, widely used as a predictive tool especially in market research, Simon Kuznets's method of measuring gross national product, the Penn effect (the observation that consumer price levels in richer countries are systematically higher than in poorer ones) and the "Wharton Model" developed by Nobel-laureate Lawrence Klein to measure and forecast economic activity. The idea behind Health Maintenance Organizations also belonged to Penn professor Robert Eilers, who put it into practice during then-president Nixon's health reform in the 1970s. Several major scientific discoveries have also taken place at Penn. The university is probably best known as the place where the first general-purpose electronic computer (ENIAC) was born in 1946 at the Moore School of Electrical Engineering. It was here also where the world's first spelling and grammar checkers were created, as well as the popular COBOL programming language. Penn can also boast some of the most important discoveries in the field of medicine. The dialysis machine used as an artificial replacement for lost kidney function was conceived and devised out of a pressure cooker by William Inouye while he was still a student at Penn Med; the Rubella and Hepatitis B vaccines were developed at Penn; the discovery of cancer's link with genes, cognitive therapy, Retin-A (the cream used to treat acne), Resistin, the Philadelphia gene (linked to chronic myelogenous leukemia) and the technology behind PET Scans were all discovered by Penn Med researchers. 
More recent gene research has led to the discovery of the genes for fragile X syndrome, the most common form of inherited mental retardation; spinal and bulbar muscular atrophy, a disorder marked by progressive muscle wasting; Charcot–Marie–Tooth disease, a progressive neurodegenerative disease that affects the hands, feet and limbs; and genetically engineered T cells used to treat lymphoblastic leukemia and refractory diffuse large B cell lymphoma. Another contribution to medicine was made by Ralph L. Brinster (Penn faculty member since 1965) who developed the scientific basis for in vitro fertilization and the transgenic mouse at Penn and was awarded the National Medal of Science in 2010. Penn professors Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa invented a conductive polymer process that earned them the Nobel Prize in Chemistry. The theory of superconductivity was also partly developed at Penn, by then-faculty member John Robert Schrieffer (along with John Bardeen and Leon Cooper). Penn professors Carl June and Michael C. Milone at Penn Medicine developed Kymriah, the first FDA-approved CAR T cell therapy for treating certain types of leukemia, approved in August 2017. == Student life == Of those accepted for admission in 2018, 48 percent were Asian, Hispanic, African-American or Native American. Fourteen percent of entering undergraduates in 2018 were international students. The composition of international first-year students in 2018 was: 46% from Asia; 15% from Africa and the Middle East; 16% from Europe; 14% from Canada and Mexico; 8% from the Caribbean, Central America and South America; 5% from Australia and the Pacific Islands. The acceptance rate for international applicants in 2018 was 493 out of 8,316 (5.9%). In 2018, 55% of all enrolled students were women. In the last few decades, Jewish enrollment has been declining; around 1999, about 28% of the students were Jewish.
In early 2020, 1,750 Penn undergraduate students were Jewish, approximately 17% of the roughly 10,000 undergraduates for 2019–20. Penn has been ranked as the number one LGBTQ+ friendly school in the country. Penn's LGBTQ+ center is the second oldest in the nation and the oldest in the Commonwealth of Pennsylvania, having served the LGBTQ+ community since 1979 by providing support and guidance through 25 groups (including Penn J-Bagel, a Jewish LGBTQ+ group; the Lambda Alliance, a general LGBTQ social organization; and oSTEM, a group for LGBTQ people in STEM fields). Penn offers courses in Sexuality and Gender Studies, which allow students to explore queer theory, the history of sexual norms, and other topics related to gender and sexual orientation. === Penn Face and behavioral health === The university's social pressure surrounding academic perfection, extreme competitiveness, and nonguaranteed readmission have created what is known as "Penn Face": students put on a façade of confidence and happiness while enduring mental turmoil. Stanford University calls this phenomenon "Duck Syndrome." In recent years, mental health has become an issue on campus, with ten student suicides between 2013 and 2016. The school responded by launching a task force. The most widely covered case of Penn Face has been that of Madison Holleran. In 2018, initiatives were enacted to ameliorate mental health problems, such as requiring sophomores to live on campus and the daily closing of Huntsman Hall at 2:00 a.m. The university's suicide rate was the catalyst for a 2018 state bill, introduced by Governor Tom Wolf, to raise Pennsylvania's standards for university suicide prevention. The university's efforts to address mental health on campus came into the national spotlight again in September 2019 when the director of the university's counseling services died by suicide six months after starting the position.
=== Student organizations === The Philomathean Society, founded in 1813, is the United States' oldest continuously existing collegiate literary society and continues to host lectures and intellectual events open to the public. The Daily Pennsylvanian is an independent, student-run newspaper, which has been published daily since it was founded in 1885. The newspaper went unpublished from May 1943 to November 1945 due to World War II. In 1984, the university lost all editorial and financial control of The Daily Pennsylvanian (also known as The DP) when the newspaper became its own corporation. The Daily Pennsylvanian has won the Pacemaker Award administered by the Associated Collegiate Press multiple times, most recently in 2019. The DP also publishes a weekly arts and culture magazine called 34th Street Magazine. The Penn Debate Society (PDS), founded in 1984 as the Penn Parliamentary Debate Society, is Penn's debate team, which competes regularly on the American Parliamentary Debate Association and international British Parliamentary circuits. The Penn History Review, founded in 1991 and published twice a year through the Department of History, is a journal of undergraduate historical research, written by and for undergraduates. ==== Penn Electric Racing ==== Penn Electric Racing is the university's Formula SAE (FSAE) team, competing in the international electric vehicle (EV) competition. Colloquially known as "PER," the team designs, manufactures, and races custom electric racecars against other collegiate teams. In 2015, PER built and raced their first racecar, REV1, at the Lincoln, Nebraska FSAE competition, winning first place. The team repeated their success with their next two racecars: REV2 won second place in 2016, and REV3 won first place in 2017. === Performing arts organizations === Penn is home to numerous organizations that promote the arts, from dance to spoken word, jazz to stand-up comedy, theatre, a cappella and more.
The Performing Arts Council (PAC) oversees 45 student organizations in these areas. The PAC has four subcommittees: A Cappella Council; Dance Arts Council; Singers, Musicians, and Comedians (SMAC); and Theatre Arts Council (TAC-e). ==== Penn Glee Club ==== The University of Pennsylvania Glee Club, founded in 1862, is tied for fourth-oldest continually running glee club in the United States and is the oldest performing arts group at the University of Pennsylvania. Each year, the Penn Glee Club writes and produces a fully staged, Broadway-style production with an eclectic mix of Penn standards, Broadway classics, classical favorites, and pop hits, highlighting choral singing from all genders. The Glee Club draws its singing members from the undergraduate and graduate students. The Penn Glee Club has traveled to nearly all 50 states in the United States and over 40 nations and territories on five continents and has appeared on national television with such celebrities as Bob Hope, Frank Sinatra, Jimmy Stewart, and Ed McMahon. Since its first performance at the White House for President Calvin Coolidge in 1926, the club has sung for numerous heads of state and world leaders. ==== Penn Band ==== The University of Pennsylvania Band has been a part of student life since 1897. The Penn Band presently performs mainly at football and basketball games as well as university functions (e.g. commencement and convocation). It was the first college band to perform at Macy's Thanksgiving Day Parade, and has performed with notable musicians, including John Philip Sousa, members of the Philadelphia Orchestra, and the U.S. Marine Band ("The President's Own"). The Penn Band has performed for Princess Grace Kelly of Monaco (sister and aunt to a number of alumni); alumnus Ed Rendell, who served as District Attorney and Mayor of Philadelphia and as Governor of Pennsylvania; Vice President Al Gore; presidents Theodore Roosevelt, Lyndon B. Johnson and Ronald Reagan; and Polish dissident and president Lech Wałęsa.
==== Penn's a cappella community ==== The A Cappella Council (ACK) is composed of 14 a cappella groups. Penn's a cappella groups entertain audiences with repertoires including pop, rock, R&B, jazz, Hindi, and Chinese songs. ACK is also home to Off The Beat, which has received the most contemporary a cappella recording awards of any collegiate group in the United States and the most features on the Best of College A Cappella albums. Penn Masala, formed in 1996, is the world's oldest and premier South Asian a cappella group based in an American university. The group has performed for Barack Obama, Joe Biden, Henry Kissinger, Ban Ki-moon, Farooq Abdullah, Imran Khan, Rajkumar Hirani, A.R. Rahman, Narendra Modi and Sunidhi Chauhan, and its a cappella version of Nazia Hassan's Urdu classic "Aap Jaisa Koi" (originally from the movie Qurbani) was sung in the movie American Desi. Penn alumni Elizabeth Banks (class of 1996) and Max Handelman (Banks' husband, class of 1995) invited Masala to appear in Pitch Perfect 2, as Banks reported that Penn's a cappella community inspired the film series starring or produced by Banks and Handelman. ==== Comedy organizations ==== Mask and Wig, a club founded in 1889, was (until fall of 2021) the oldest all-male musical comedy troupe in the country. In 2021 the club voted to become gender-inclusive, with auditions open to all undergraduates: male, female, and non-binary. Bloomers comedy group, founded in 1978, is the "nation's first collegiate all-women musical and sketch comedy troupe." Bloomers was founded at Penn by Joan Harrison. In the mid-2010s, Bloomers revised its constitution to be open to "anyone who does not identify as a cisgender man" and now accepts all persons from under-represented gender identities who perform comedy. Bloomers performs sketches and elaborate shows almost every semester.
The comedy troupe is named after bloomers, the once popular long, loose-fitting undergarment, gathered at the ankle, worn under a short skirt (developed in the mid-19th century as a healthy, comfortable alternative to the heavy, constricting dresses then worn by American women), which were, in turn, named after Amelia Jenks Bloomer. Bloomers' best-known performing alumna is Vanessa Bayer, formerly of Saturday Night Live, where she was one of the show's longest-serving female cast members. === Religious and spiritual organizations === The following religious and spiritual organizations have a significant on-campus presence at Penn: (A) Mainstream Protestantism: Dating back to 1857, The Christian Association (a.k.a. The CA) is composed primarily of students from Mainline Protestant backgrounds. Historically, the CA ran several foreign missions, including one in China, and for decades ran a camp for socio-economically disadvantaged children from Philadelphia. At present, the CA occupies part of the parsonage at Tabernacle United Church of Christ. (B) Judaism: Organized Jewish life did not begin on campus in earnest until the start of the 20th century. Jewish life on campus is centered at the Penn branch of Hillel International, which inspires students to explore Judaism, creates patterns of Jewish living that can be sustained after graduation, provides religious communities, promotes educational initiatives, social justice projects, and social and cultural opportunities, hosts groups focusing on Israel education and politics, and runs a Penn-approved kosher dining hall (supervised by the Community Kashrus of Greater Philadelphia). In addition to Hillel, the other major Jewish organization with significant impact on Penn's campus is The Chabad Lubavitch House at Penn (founded in 1980), which, among other activities, brings together Jewish college students with noted Jewish academics for in-depth discussions and debate.
(C) Roman Catholicism: The Penn Newman Catholic Center (the Newman Center) was founded in 1893 (as the first Newman Center in the country) with the mission of supporting students, faculty, and staff in their religious endeavors. The organization brings prominent Christian figures to campus, including Rev. Thomas "Tom" J. Hagan, OSFS, who worked in the Newman Center and founded the Haiti-based non-profit Hands Together; and James Martin SJ (Wharton School undergraduate class of 1982). Father Martin, an editor-at-large of the Jesuit magazine America and a frequent commentator on the life and teachings of Jesus and Ignatian spirituality, is especially well known for his outreach to the LGBT community, which has drawn a strong backlash from parts of the Catholic Church but has provided comfort to Penn students and other members of the Roman Catholic community who wish to stay connected with their faith and identify as LGBTQ. (D) Hinduism and Jainism: Penn funds (via the Graduate and Professional Student Assembly or similar undergraduate organization) a variety of official clubs focused on India, including a number focused on students who are Hindu or Jain, such as: (1) 'Pan-Asian American Community House (PAACH)', a center for students to celebrate South Asian, East Asian, and Southeast Asian culture and religion, (2) 'Rangoli—The South Asian Association at Penn', which educates and informs Penn students (mainly graduate and professional students) with ancestry or interest in South Asia and whose goals include a desire to "rekindle the spirit of community" through events, and (3) 'Penn Hindu & Jain Association', a student-run official club at Penn that has 80 to 110 student members and an extensive alumni network, dedicated to raising awareness of the Hindu and Jain faiths and fostering further development of these communities in the greater Philadelphia area by providing a variety of services and hosting a number of events such as the Holi Festival (which has been held annually at Penn since 1993)
and "... aims to be a home to anyone seeking to explore their spiritual, religious, or social interests." (E) Islam: In 1963, the Muslim Students' Association (MSA National) and the Penn chapter of MSA National were founded to facilitate Muslim life among students on college campuses. Penn MSA was established to help Penn Muslims build faith and community by fostering a space under the guidance of Islamic principles; toward that goal, Penn MSA supports the mission of its umbrella organization, the Islamic Society of North America, to "foster the development of the Muslim community, interfaith relations, civic engagement, and better understandings of Islam." The Muslim Life Program at Penn also provides such support and led Penn, in January 2017, to hire its first full-time Muslim chaplain, the co-president of the Association of Campus Muslim Chaplains, Sister Patricia Anton (whose background includes working with Muslim, interfaith, academic and peace-building institutions such as the Islamic Society of North America and Islamic Relief). Chaplain Anton's mandate includes supporting and guiding the Penn Muslim community and fostering its further development by creating a welcoming environment that provides opportunities to intellectually and spiritually engage with Islam. Penn also has a residential house, the Muslim Life Residential Program, which provides a live/learn environment focused on the appreciation of Islamic culture, food, history, and practice, and shows its Penn student residents how Islam is deeply integrated in the culture of Philadelphia so they may appreciate how Islam influences daily life.
(F) Buddhism: Penn has a Buddhist chaplain (as well as chaplains of other faiths) and funds the Penn Meditation and Buddhism Club, which (1) is dedicated to helping Penn students practice mindfulness and meditation and learn about Buddhism, (2) conducts weekly meetings that begin with a guided meditation and are followed by discussions of topics relating to mindfulness and Buddhism, and (3) organizes other activities such as ramen nights and weekend meditation retreats to the local Won Buddhism center. == Athletics == Penn's sports teams are nicknamed the Quakers, but the teams are often also referred to as The Red and Blue, as reflected in the popular song sung after every athletic contest where the Penn Band or other musical groups are present. The athletes participate in the Ivy League and Division I (Division I FCS for football) in the NCAA. In recent decades, they often have been league champions in football (14 times from 1982 to 2010) and basketball (22 times from 1970 to 2006). The first athletic team at Penn was the cricket team, which formed in 1842 and played regularly through 1846, the year it lost its "grounds," and then only played intermittently until 1864, the year it played its first intercollegiate game (against Haverford College). A rowing (or crew) team composed of Penn students, though not officially representing Penn, was formed in 1854, but it did not compete against other colleges as an official Penn team until 1879. The rugby football team began to play against other colleges, most notably against the College of New Jersey (now Princeton University) in 1874, using a combination of association football (i.e. soccer) and rugby rules (the twenty players on each side were able to use their hands but were not able to pass or bat the ball forward). === Baseball === The University of Pennsylvania's first baseball team was fielded in 1875.
Penn has won four championships in the Eastern Intercollegiate Baseball League, a baseball-only conference that existed from 1930 to 1992, which consisted of the eight Ivy League schools and Army and Navy. Since 1992, Penn baseball has claimed an Ivy League title, advancing to the NCAA Division I Baseball Championship five times. === Basketball === Penn basketball is steeped in tradition. Penn was retroactively recognized as the pre-NCAA tournament national champion for the 1919–20 and 1920–21 seasons by the Helms Athletic Foundation and for the 1919–20 season by the Premo-Porretta Power Poll. Penn made its only (and the Ivy League's second) Final Four appearance in 1979, where the Quakers lost to the Magic Johnson-led Michigan State in Salt Lake City. (Dartmouth twice finished second in the tournament in the 1940s, but that was before the beginning of formal League play.) Penn's team is also a member of the Philadelphia Big 5, along with La Salle, Saint Joseph's, Temple, Villanova, and Drexel. In 2007, the men's team won its third consecutive Ivy League title and then lost in the first round of the NCAA Tournament to Texas A&M. Penn last made the NCAA tournament in 2018, where it lost to top-seeded Kansas. === Cricket === The first University of Pennsylvania cricket team, reported to be the first cricket team in the United States composed exclusively of Americans, was organized in 1842. On May 7, 1864, Penn played its first intercollegiate game against Haverford College (the third-oldest intercollegiate athletic contest, after the 1852 Harvard–Yale crew race and the 1859 Amherst–Williams baseball game). After Penn moved west of the Schuylkill River in 1872, Penn played cricket at one of the local clubs, Belmont Cricket Club, Merion Cricket Club, Germantown Cricket Club, or at Haverford College. Beginning in 1875 and through 1880, Penn fielded a varsity eleven, which played a few matches each year against opponents that included Haverford College and Columbia College.
In 1881, Penn, Harvard College, Haverford College, Princeton College (then known as the College of New Jersey), and Columbia College formed the Intercollegiate Cricket Association, which Cornell University later joined. Penn won the Intercollegiate Cricket Association championship, the de facto national championship, 23 times (18 solo, three shared with Haverford and Harvard, one shared with Haverford and Cornell, and one shared with just Haverford) during the 44 years that the association existed, from 1881 through 1924. In the 1890s, Penn's cricket team frequently toured Canada and the British Isles. Perhaps the university's most famous cricket player was George Patterson (class of 1888), who still holds the North American batting record and who went on to play for the professional Philadelphia Cricket Team. Following World War I, cricket went into serious decline, such that in 1924 Penn fielded its last team of the twentieth century. Starting in 2009, however, Penn once again fielded a cricket team, albeit a club team, which became the first winner of a tournament for teams from the Ivies. === Curling === The University of Pennsylvania Curling Club qualified for the 2023 National Championship in 6th place, the same seed at which it qualified for the 2022 National Championship (where it finished in 2nd place); in 2023, however, the team won the national championship by defeating archrival Princeton University in the championship match, 6 to 3. Penn Curling also won the National Championship in 2016 and is the only East Coast team to have won the Curling National Championship. === Football === Penn first fielded a football team against Princeton at the Germantown Cricket Club in Philadelphia on November 11, 1876. During the 1890s, Penn's coach and alumnus George Washington Woodruff introduced the quarterback kick, a forerunner of the forward pass, as well as the place-kick from scrimmage and the delayed pass.
The achievements of two of Penn's other outstanding players from that era, John Heisman, a Law School alumnus, and John Outland, a Penn Med alumnus, are remembered each year with the presentation of the Heisman Trophy to the most outstanding college football player of the year, and the Outland Trophy to the most outstanding college football interior lineman of the year. The Bednarik Award, named for Chuck Bednarik, a three-time All-American center and linebacker who starred on the 1947 team, is awarded annually to college football's best defensive player. Bednarik went on to play for 12 years with the Philadelphia Eagles, and was elected to the Pro Football Hall of Fame in 1969. Penn's game against the University of California, Berkeley on September 29, 1951, in front of a crowd of 60,000 at Franklin Field, was the first college football game to be broadcast in color. === Ice hockey === Penn's first ice hockey team competed during the 1896–97 academic year, and joined the nascent Intercollegiate Hockey Association (IHA) in 1898–99. On the first team in 1896–97 were several players of Canadian background, among them middle-distance runner and Olympian George Orton (the first disabled person to compete in the Olympics). Penn fielded teams intermittently until 1965, when it formed a varsity squad that was terminated in 1977. Penn now fields a club team that plays in the American Collegiate Hockey Association Division II, is a member of the Colonial States College Hockey Conference, and continues to play at the Class of 1923 Arena in Philadelphia. === Olympic athletes === At least 43 Penn alumni have earned 81 Olympic medals (26 gold). Penn won more of its "medals" (which were actually cups, trophies, or plaques, as medals were not introduced until a later Olympics) at the 1900 Summer Olympics held in Paris than at any other Olympics.
In the 2024 Summer Olympics in Paris, 13 current Penn students or alumni participated in 5 sports (athletics [4], breaking [1], fencing [3], rowing [4], and swimming [1]) for 7 countries (Australia [1], Bermuda [1], Canada [2], Egypt [1], Nigeria [1], Slovenia [1], and USA [6]). === Rowing === Rowing at Penn dates back to at least 1854 with the founding of the University Barge Club. The university currently hosts both heavyweight and lightweight men's teams and an open-weight women's team, all of which compete as part of the Eastern Sprints League. Ellis Ward was Penn's first intercollegiate crew coach, from 1879 through 1912. Over the course of Ward's coaching career at Penn, his "Red and Blue crews won 65 races, in about 150 starts." Ward coached Penn's 8-oared boat to the finals of the Grand Challenge Cup (the oldest and most prized trophy) at the Henley Royal Regatta (but in that final race it was defeated by the champion Leander Club). Penn Rowing has produced a long list of famous coaches and Olympians. Members of the Penn crew team, rowers Sidney Jellinek and Eddie Mitchell and coxswain John G. Kennedy, won the bronze medal for the United States at the 1924 Olympics. Joe Burk (class of 1935) was captain of the Penn crew team, twice winner of the Henley Diamond Sculls, recipient of the James E. Sullivan Award as the nation's best amateur athlete in 1939, and Penn coach from 1950 to 1969. The outbreak of World War II canceled the 1940 Olympics, for which Burk was favored to win the gold medal. The 1955 Men's Heavyweight 8, coached by Burk, became one of only four American university crews in history to win the Grand Challenge Cup at the Henley Royal Regatta.
Other Penn Olympic athletes, and Penn coaches of such athletes, include: (a) John Anthony Pescatore (who competed in the 1988 Seoul Olympic Games for the United States as stroke of the men's coxed eight, which earned a bronze medal, and later competed at the 1992 Barcelona Olympic Games in the men's coxless pair), (b) Susan Francia (winner of gold medals as part of the women's 8-oared boat at the 2008 and 2012 Olympics), (c) Regina Salmons (member of the 2021 USA team), (d) Rusty Callow, (e) Harry Parker, (f) Ted Nash, and (g) John B. Kelly Jr. (son of John B. Kelly Sr., winner of three medals at the 1920 Summer Olympics, and brother of Princess Grace of Monaco), who in 1947 became the second Penn crew alumnus to win the James E. Sullivan Award as the nation's best amateur athlete and who won a bronze medal at the 1956 Summer Olympics. Penn men's crew team won the National Collegiate Rowing Championship in 1991. A member of that team, Janusz Hooker (Wharton School class of 1992), won the bronze medal in Men's Quadruple Sculls for Australia at the 1996 Summer Olympics. The Penn teams presently row out of College Boat Club, No. 11 Boathouse Row. === Rugby === The Penn men's rugby football team is one of the oldest collegiate rugby teams in the United States. Penn first fielded a team in the mid-1870s, playing by rules much closer to the rugby union and association football codes than to American football rules (which had not yet been invented). Among its earliest games was one against the College of New Jersey (which became Princeton in 1896), played in Philadelphia on Saturday, November 11, 1876, less than two weeks before Princeton met with Harvard and Columbia on November 23, 1876, to confirm that all their games would be played using the rugby union rules.
Princeton and Penn played their November 1876 game under a combination of the rugby and association football codes (there were 20 players per side, and players were able to touch the ball with their hands). The rugby code influence was due, in part, to the fact that some of their students had been educated in English public schools. Among the prominent alumni to play in a 19th-century version of rugby, whose rules did not then allow forward passes or center snaps, was John Heisman, namesake of the Heisman Trophy and an 1892 graduate of the University of Pennsylvania Law School. Heisman was instrumental in the first decade of the 20th century in changing the rules to more closely relate to the present rules of American football. One of Heisman's teammates (who was unanimously voted Captain in the fall after Heisman graduated) was Harry Arista Mackey, Penn Law class of 1893 (who subsequently served as Mayor of Philadelphia from 1928 to 1932). In 1906, rugby under the rugby union code was reintroduced to Penn (Penn had last played under the rugby union code in 1882, having played rugby under a number of different rugby football rulebooks and codes from 1883 through the 1890s) by Frank Villeneuve Nicholson (Penn Dental School, class of 1910), who in 1904 had captained the Australian national rugby team in its match against England. Penn played under rugby union code rules at least through 1912, contemporaneously with Penn playing American gridiron football. Evidence may be found in a yearbook photo and in an October 22, 1910, Daily Pennsylvanian article (quoted below): Such is the devotion to English rugby football on the part of the University of Pennsylvania's students from New Zealand, Australia, and England that they meet on Franklin Field at 7 o'clock every morning and practice the game.
The varsity track and football squads monopolize the field to such an extent that the early hours of the morning are the only ones during which the rugby enthusiasts can play. Any time except Friday, Saturday and Sunday, a squad of 25 men may be seen running through the hardest kind of practice, after which they may divide into two teams and play a hard game. Once a week, captain CC Walton ('11), dental, who hails from New Zealand, gives the enthusiastic players a blackboard talk in which he explains the intricacies of the game in detail. The player-coach of the United States' gold medal-winning rugby team at the 1924 Summer Olympics was Alan Valentine, who played rugby while at Penn, which he attended during the 1921/1922 academic year while earning a master's degree at Wharton. Though Penn played rugby under rugby union rules from 1929 through 1934, there is no indication that Penn had a rugby team from 1935 through 1959, when Penn men's rugby became permanent through the leadership of Harry "Joe" Edwin Reagan III (Penn College class of 1962 and Penn Law class of 1965), who also went on to help create and incorporate USA Rugby (in 1975) and served as its treasurer (in 1981), and Oreste P. "Rusty" D'Arconte (Penn College class of 1966). Thus, with D'Arconte's hustle and Reagan's charisma and organizational skills, a team which had fielded a side of fifteen intermittently from 1912 through 1960 became permanent. In the spring of 1984, Penn women's rugby began to compete, led by social chair Tamara Wayland (College class of 1985, who subsequently became the women's representative to, and vice president of, USA Rugby South from 1996 to 1998); club president Marianne Seligson; and Penn Law student Gigi Sohn.
As of 2020, the Penn women's rugby team is coached by (a) Adam Dick, a 300-level certified coach with over 15 years of rugby coaching experience, including serving as the first coach of the first women's rugby team at the University of Arizona, where he had been a four-year starter on the men's first XV, and (b) Philly women's player Kate Hallinan. Penn's men's rugby team plays in the Ivy Rugby Conference, has finished as runner-up in both 15s and 7s in the Conference, and won the Ivy Rugby Tournament in 1992. As of 2011, the club uses the state-of-the-art facilities at Penn Park. The Penn Quakers' rugby team played on national TV at the 2013 Collegiate Rugby Championship, a college rugby tournament that for a number of years had been played each June at Subaru Park in Philadelphia and was broadcast live on NBC. In their inaugural appearance in the tournament, the Penn men's rugby team won the Shield Competition, beating local Big Five rival Temple University 17–12 in the final. In the semifinal match of that Shield Competition, Penn Rugby became the first Philadelphia team to beat a non-Philadelphia team in CRC history, with a 14–12 win over the University of Texas. As of 2020, the Penn men's rugby team is coached by Tiger Bax, a former professional rugby player from Cape Town, South Africa, whose playing experience includes stints in the Super Rugby competition with the Stormers (15s) and Mighty Mohicans (7s), as well as with the Gallagher Premiership Rugby side Saracens, and whose coaching experience includes three successful years at Valley Rugby Football Club in Hong Kong; and Tyler May, from Cherry Hill, New Jersey, who played rugby at Pennsylvania State University, where he was a first XV player for three years. Penn's graduate business and law schools have also fielded rugby teams. The Wharton rugby team has competed from 1978 to the present.
The Penn Law Rugby team (1985 through 1993) counts among its alumni Walter Joseph "Jay" Clayton III (Penn Law class of 1993), chair of the U.S. Securities and Exchange Commission from May 4, 2017, until December 23, 2020; Raymond Hulser, former chief of the Public Integrity Section of the United States Department of Justice (who was also hired by DOJ special counsel Jack Smith to investigate the alleged mishandling by former President Donald J. Trump of certain top secret documents); and Magistrate Judge Bruce Reinhart, who approved the search of Mar-a-Lago, the residence of current U.S. president Donald Trump in Palm Beach, Florida. Undergraduate Penn Rugby alumni include (1) Conor Lamb (Penn College class of 2006 and Penn Law class of 2009), who played for the undergraduate team and, as of 2021, is a member of the United States House of Representatives, elected originally from Pennsylvania's 18th congressional district and, since 2019, representing Pennsylvania's 17th congressional district; and (2) Argentina's richest person, Marcos Galperin (Wharton undergraduate class of 1994), a premier player on the 1992 Ivy League Tournament championship team, who founded Mercado Libre, an online marketplace dedicated to e-commerce and online auctions, which, as of 2016, is the most popular e-commerce site in South America by number of visitors. === Facilities === Franklin Field, with a present seating capacity of 52,593, is where the Quakers play football, lacrosse, sprint football, and track and field (and formerly played baseball, field hockey, soccer, and rugby). It is the oldest stadium still operating for college football games, the first stadium to sport two tiers, the first stadium in the country to have a scoreboard, the second stadium to host a radio broadcast of football, the first stadium from which a commercially televised football game was broadcast, and the first stadium from which a college football game was broadcast in color.
Franklin Field also played host to the Philadelphia Eagles from 1958 to 1970. Since 1895, Franklin Field has hosted the annual collegiate track and field event "the Penn Relays," which is the oldest and largest track and field competition in the United States. Penn's Palestra, located mere yards from Franklin Field, is the home gym of the Penn Quakers men's and women's basketball and volleyball teams and the wrestling team, and hosts Philadelphia Big Five basketball and other high school and college sporting events. The Palestra has been called "the most important building in the history of college basketball" and "changed the entire history of the sport for which it was built". The Palestra has hosted more NCAA Tournament basketball games than any other facility. Penn's River Fields hosts a number of athletic fields, including the Rhodes Soccer Stadium, the Ellen Vagelos C'90 Field Hockey Field, and the Irving "Moon" Mondschein Throwing Complex. Penn baseball plays its home games at Meiklejohn Stadium at Murphy Field. Penn's Class of 1923 Arena (with seating for up to 3,000 people) was built to host the University of Pennsylvania varsity ice hockey team, since disbanded, and hosts or has hosted: Penn's men's and women's club ice hockey teams; practices or exhibition games for the Philadelphia Flyers, Colorado Avalanche, and Carolina Hurricanes; roller hockey for the professional Philadelphia Bulldogs; and rock concerts, such as one in 1982 featuring Prince. == People == === Notable people === Penn alumni, faculty, and trustees include those who have distinguished themselves in the sciences, academia, politics, business, the military, sports, the arts, and media. Penn alumni include two presidents of the United States, Donald Trump and William Henry Harrison (and eight presidents who were awarded honorary doctorate degrees by Penn).
Of the presidents who were awarded honorary doctorates by Penn, five were awarded prior to becoming president (Washington, Taft, Wilson, Hoover, and Eisenhower) and three were awarded while they were president (Garfield and both Roosevelts). Nine foreign heads of state attended Penn, including former prime minister of the Philippines Cesar Virata; the first president of Nigeria, Nnamdi Azikiwe; the first president of Ghana, Kwame Nkrumah; and the current president of Ivory Coast, Alassane Ouattara. Prior to becoming president of the United States, Joe Biden was a Benjamin Franklin Presidential Practice Professor at the University of Pennsylvania, where he led the Penn Biden Center for Diplomacy and Global Engagement, a center focused principally on diplomacy, foreign policy, and national security. Penn alumni or faculty also include three United States Supreme Court justices (William J. Brennan, Owen J. Roberts, and James Wilson) and at least four supreme court justices of foreign nations, including Ronald Wilson of the High Court of Australia, Ayala Procaccia of the Israel Supreme Court, Yvonne Mokgoro, former justice of the Constitutional Court of South Africa, and Irish Court of Appeal justice Gerard Hogan. Since its founding, Penn alumni, trustees, and faculty have included eight Founding Fathers of the United States who signed the Declaration of Independence, seven who signed the United States Constitution, and 24 members of the Continental Congress. Penn alumni also include 32 U.S. senators, 163 members of the U.S. House of Representatives, 19 U.S. Cabinet secretaries, 46 governors, and 28 state supreme court justices. Penn alumni, trustees, and/or faculty have served in every Congress since the first in 1789 and have represented 26 different states.
Penn alumni in business, finance and investment banking include Warren Buffett (CEO of Berkshire Hathaway), Elon Musk (co-founder of PayPal, Tesla, OpenAI and Neuralink, founder of SpaceX, The Boring Company and xAI), Sundar Pichai (CEO of Alphabet and Google), Peter Lynch (former manager of the Fidelity Magellan Fund), and other high-profile figures on Wall Street. Penn alumni who received federal aid, 10 years after starting at Penn, have the highest median incomes among alumni of Ivy League schools. Penn has the largest number of undergraduate alumni (36) who are billionaires (with combined wealth of $367 billion—also the largest number among colleges and universities in the US). Penn alumni have won 53 Tony Awards, 17 Grammy Awards, 25 Emmy Awards, 13 Oscars, and 1 EGOT (John Legend). Penn alumni have also had a significant impact on the United States military as they include Samuel Nicholas, "founder" of United States Marine Corps and William A. Newell, whose congressional action formed a predecessor to the current United States Coast Guard, and numerous alumni have become generals or similar rank in the United States Armed Forces. At least two Penn alumni have been NASA astronauts, and five Penn alumni have been awarded the Medal of Honor. As of 2023, there have been 38 Nobel laureates affiliated (see List of Nobel laureates by university affiliation) with the University of Pennsylvania. At least 43 different Penn alumni have earned 81 Olympic medals (26 gold). Penn's alumni also include poets Ezra Pound and William Carlos Williams, civil rights leader Martin Luther King, Jr., linguist and political theorist Noam Chomsky, architect Louis Kahn, cartoonist Charles Addams, actresses Candice Bergen and Elizabeth Banks. === Alumni organizations === Penn has over 120 international alumni clubs in 52 countries and 37 states, which offer opportunities for alumni to reconnect, participate in events, and work on collaborative initiatives. 
In addition, in 1989, Penn bought a 14-story clubhouse building (purpose-built for Yale Club) in New York City from Touro College for $15 million to house Penn's largest alumni chapter. After raising a separate $25 million (including $150,000+ donations each from such alumni as Estee Lauder heirs Leonard Lauder and Ronald Lauder, Saul Steinberg, Michael Milken, Donald Trump, and Ronald Perelman) and two years of renovation, the Penn Club of New York moved to its current location at 30 West 44th Street on NYC's Clubhouse Row. == See also == Education in Philadelphia List of universities by number of billionaire alumni Think Tanks and Civil Societies Program (TTCSP) University of Pennsylvania Press == Notes == == References == == External links == Official website University of Pennsylvania athletics website
https://en.wikipedia.org/wiki/University_of_Pennsylvania
Brainfuck is an esoteric programming language created in 1993 by Swiss student Urban Müller. Designed to be extremely minimalistic, the language consists of only eight simple commands, a data pointer, and an instruction pointer. Brainfuck is an example of a so-called Turing tarpit: it can be used to write any program, but it is not practical to do so because it provides so little abstraction that the programs get very long or complicated. While Brainfuck is fully Turing-complete, it is not intended for practical use but to challenge and amuse programmers. Brainfuck requires one to break down commands into small and simple instructions. The language takes its name from the slang term brainfuck, which refers to things so complicated or unusual that they exceed the limits of one's understanding, as it was not meant or made for designing actual software but to challenge the boundaries of computer programming. Because the language's name contains profanity, many substitutes are used, such as brainfsck, branflakes, brainoof, brainfrick, BrainF, and BF. == History == Müller designed Brainfuck with the goal of implementing the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in Motorola 68000 assembly on the Amiga and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a "Readme" file, which briefly described the language, and challenged the reader "Who can program anything useful with it? :)". Müller also included an interpreter and some examples. A second version of the compiler used only 240 bytes. == Language design == The language consists of eight commands. A brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). 
The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command. The brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as a one-dimensional array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding). The eight language commands each consist of a single character:
>   Increment the data pointer by one (to point to the next cell to the right).
<   Decrement the data pointer by one (to point to the next cell to the left).
+   Increment the byte at the data pointer by one.
-   Decrement the byte at the data pointer by one.
.   Output the byte at the data pointer.
,   Accept one byte of input, storing its value in the byte at the data pointer.
[   If the byte at the data pointer is zero, then instead of moving the instruction pointer forward to the next command, jump it forward to the command after the matching ] command.
]   If the byte at the data pointer is nonzero, then instead of moving the instruction pointer forward to the next command, jump it back to the command after the matching [ command.
[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two. Brainfuck programs are usually difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands and partly because the program's text gives no direct indications of the program's state. These, as well as Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing-complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model if given access to an unlimited amount of memory and time. A variety of Brainfuck programs have been written. Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C due to its simplicity. Brainfuck interpreters written in the Brainfuck language itself also exist.
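As the passage notes, writing an interpreter in a conventional language is straightforward. A minimal sketch in Python (the function name bf_run and the fixed 30,000-cell tape are illustrative choices, not part of any official specification; unmatched brackets are not handled):

```python
def bf_run(code: str, inp: str = "") -> str:
    """Interpret a Brainfuck program and return its output as a string."""
    # Pre-compute matching bracket positions for [ and ].
    jumps, stack = {}, []
    for pos, ch in enumerate(code):
        if ch == "[":
            stack.append(pos)
        elif ch == "]":
            start = stack.pop()
            jumps[start], jumps[pos] = pos, start

    tape = [0] * 30000          # byte cells, initialized to zero
    dp = ip = 0                 # data pointer and instruction pointer
    out, in_it = [], iter(inp)
    while ip < len(code):
        ch = code[ip]
        if ch == ">":
            dp += 1
        elif ch == "<":
            dp -= 1
        elif ch == "+":
            tape[dp] = (tape[dp] + 1) % 256
        elif ch == "-":
            tape[dp] = (tape[dp] - 1) % 256
        elif ch == ".":
            out.append(chr(tape[dp]))
        elif ch == ",":
            tape[dp] = ord(next(in_it, "\0"))   # NUL on end of input
        elif ch == "[" and tape[dp] == 0:
            ip = jumps[ip]      # jump forward past the matching ]
        elif ch == "]" and tape[dp] != 0:
            ip = jumps[ip]      # jump back to the matching [
        ip += 1                 # all other characters are ignored
    return "".join(out)

# 8 * 8 = 64, plus 1, gives ASCII 65
print(bf_run("++++++++[>++++++++<-]>+."))  # prints "A"
```

The behavior at end of input (here, storing a NUL byte) is one of the points the language leaves unspecified; other interpreters leave the cell unchanged or store −1.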
== Examples == === Adding two values === As a first, simple example, the following code snippet will add the current cell's value to the next cell:
[->+<]
Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0. This can be incorporated into a simple addition program as follows:
++>+++++[<+>-]++++++++[<++++++>-]<.
This sets the first cell to 2 and the second to 5, moves the second cell's value onto the first (2 + 5 = 7), then adds 6 eight times (48) to convert the digit 7 to its ASCII code (55) and prints "7". === Hello World! === The following program prints "Hello World!" and a newline to the screen:
++++++++            Set Cell #0 to 8
[
    >++++           Add 4 to Cell #1; this will always set Cell #1 to 4
    [               as the cell will be cleared by the loop
        >++         Add 2 to Cell #2
        >+++        Add 3 to Cell #3
        >+++        Add 3 to Cell #4
        >+          Add 1 to Cell #5
        <<<<-       Decrement the loop counter in Cell #1
    ]               Loop until Cell #1 is zero
    >+              Add 1 to Cell #2
    >+              Add 1 to Cell #3
    >-              Subtract 1 from Cell #4
    >>+             Add 1 to Cell #6
    [<]             Move back to the first zero cell
    <-              Decrement the loop counter in Cell #0
]                   Loop until Cell #0 is zero
>>.                 Cell #2 has value 72 which is 'H'
>---.               Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++.       Likewise for 'llo' from Cell #3
>>.                 Cell #5 is 32 for the space
<-.                 Subtract 1 from Cell #4 for 87 to give a 'W'
<.                  Cell #3 was set to 'o' from the end of 'Hello'
+++.------.--------.    Cell #3 for 'rl' and 'd'
>>+.                Add 1 to Cell #5 gives us an exclamation point
>++.                And finally a newline from Cell #6
For readability, this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
=== ROT13 === This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65–77) to N-Z (78–90), and vice versa. Also it must map a-m (97–109) to n-z (110–122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates. === Simulation of abiogenesis === In 2024, a Google research project used a slightly modified 10-command version of Brainfuck as the basis of an artificial digital environment. In this environment, they found that replicators arose naturally and competed with each other for domination of the environment. == See also == JSFuck – an esoteric subset of the JavaScript programming language with a very limited set of characters == Notes == == References == == External links == Official website
https://en.wikipedia.org/wiki/Brainfuck
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. == Optimization problems == Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set. A problem with continuous variables is known as a continuous optimization, in which optimal arguments from a continuous set must be found. They can include constrained problems and multimodal problems. An optimization problem can be represented in the following way:
Given: a function f : A → ℝ from some set A to the real numbers
Sought: an element x0 ∈ A such that f(x0) ≤ f(x) for all x ∈ A ("minimization") or such that f(x0) ≥ f(x) for all x ∈ A ("maximization").
Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework.
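The (f, A) formulation above can be mirrored directly in code; a minimal sketch where the feasible set is finite, so the problem can be solved by enumeration (the class and function names are illustrative):

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Problem:
    f: Callable[[float], float]   # objective function f : A -> R
    A: Iterable[float]            # feasible set (finite here, so enumeration works)

def solve_min(p: Problem) -> float:
    """Return an x0 in A with f(x0) <= f(x) for all x in A."""
    return min(p.A, key=p.f)

p = Problem(f=lambda x: (x - 2) ** 2, A=range(-5, 6))
print(solve_min(p))   # 2, the feasible point with the smallest objective value
```

For continuous or infinite feasible sets, enumeration is replaced by the analytic and iterative methods discussed later in the article.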
Since the following is valid: f(x0) ≥ f(x) ⇔ −f(x0) ≤ −f(x), it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is always necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically, A is some subset of the Euclidean space ℝⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions. The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. A local minimum x* is defined as an element for which there exists some δ > 0 such that, for all x ∈ A where ‖x − x*‖ ≤ δ, the expression f(x*) ≤ f(x) holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly.
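The maximization–minimization equivalence noted above means a separate maximization routine is unnecessary once a minimizer exists: negate the objective, minimize, and negate the result. A small sketch (the grid-search minimizer stands in for any real solver):

```python
def minimize(f, candidates):
    """Return (argmin, min) of f over a finite candidate set."""
    best = min(candidates, key=f)
    return best, f(best)

def maximize(f, candidates):
    # max f = -min(-f), by the equivalence f(x0) >= f(x) <=> -f(x0) <= -f(x)
    x, neg_val = minimize(lambda x: -f(x), candidates)
    return x, -neg_val

xs = [i / 100 for i in range(-300, 301)]       # grid on [-3, 3]
print(minimize(lambda x: (x - 1) ** 2, xs))    # minimum 0.0 at x = 1.0
print(maximize(lambda x: -(x - 1) ** 2, xs))   # maximum 0.0 at x = 1.0
```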
While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. == Notation == Optimization problems are often expressed with special notation. Here are some examples: === Minimum and maximum value of a function === Consider the following notation: min_{x∈ℝ} (x² + 1) This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers ℝ. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation max_{x∈ℝ} 2x asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined".
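The two notation examples can be checked numerically (a dense finite grid is an illustrative stand-in for ℝ):

```python
grid = [i / 1000 for i in range(-5000, 5001)]   # stand-in for the reals on [-5, 5]

# min over x in R of x**2 + 1: value 1 at x = 0
print(min(x ** 2 + 1 for x in grid))            # 1.0

# max over x in R of 2*x is unbounded; on any finite grid the "max" just
# sits at the grid edge, illustrating why the true answer is "infinity"
print(max(2 * x for x in grid))                 # 10.0 on this grid only
```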
=== Optimal input arguments === Consider the following notation: argmin_{x∈(−∞,−1]} (x² + 1), or equivalently argmin_x (x² + 1), subject to: x ∈ (−∞,−1]. This represents the value (or values) of the argument x in the interval (−∞,−1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set. Similarly, argmax_{x∈[−5,5], y∈ℝ} x cos y, or equivalently argmax_{x,y} x cos y, subject to: x ∈ [−5,5], y ∈ ℝ, represents the {x, y} pair (or pairs) that maximizes (or maximize) the value of the objective function x cos y, with the added constraint that x lie in the interval [−5,5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form {5, 2kπ} and {−5, (2k + 1)π}, where k ranges over all integers. Operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum. == History == Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939.
(Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947; John von Neumann and other researchers worked on the theoretical aspects of linear programming (such as the theory of duality) around the same time. Other notable researchers in mathematical optimization include the following: == Major subfields == Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as a generalization of linear or convex quadratic programming. Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded. Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs. Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming. Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone. Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program. Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming.
Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming. Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem. Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it. Stochastic programming studies the case in which some of the constraints or parameters depend on random variables. Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set. Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one. Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process. Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions. Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems. Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning). 
Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints. Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling. Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model. In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area. Optimal control theory is a generalization of the calculus of variations which introduces control policies. Dynamic programming is an approach to solving stochastic optimization problems with randomness and unknown model parameters. It studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation. Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities. === Multi-objective optimization === Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created plotting weight against stiffness of the best designs is known as the Pareto frontier.
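For a finite list of candidate designs, the Pareto set just described can be computed directly: one design dominates another if it is no worse in every objective and strictly better in at least one. A sketch where both objectives, say weight and flexibility (the reciprocal of rigidity), are to be minimized (the data points are invented for illustration):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(designs):
    """Keep the designs not dominated by any other design."""
    return [d for d in designs if not any(dominates(o, d) for o in designs if o != d)]

# (weight, flexibility) pairs; lower is better in both.
designs = [(1.0, 9.0), (2.0, 7.0), (3.0, 7.5), (4.0, 4.0), (5.0, 4.5), (6.0, 2.0)]
print(pareto_set(designs))   # (3.0, 7.5) and (5.0, 4.5) are dominated and drop out
```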
A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: if it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering. === Multi-modal or global optimization === Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques, due to their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing. == Classification of critical points and extrema == === Feasibility problem === The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value.
This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative. === Existence === The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. === Necessary conditions for optimality === One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. === Sufficient conditions for optimality === While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither.
When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. === Sensitivity and continuity of optima === The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters. === Calculus of optimization === For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, which arise in the loss-function minimization of neural networks. Positive-negative momentum estimation has been proposed as a way to escape local minima and converge toward the global minimum of the objective function. Further, critical points can be classified using the definiteness of the Hessian matrix: If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point.
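The Hessian-based classification above can be carried out numerically. The following is an illustrative sketch (not taken from any particular implementation): the Hessian is approximated by central differences and classified by the signs of its eigenvalues; the two test functions are hypothetical toy examples.

```python
# Numerical sketch of the second-order (Hessian) test at a critical point.
import numpy as np

def numerical_hessian(f, x, h=1e-5):
    """Central-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

def classify_critical_point(f, x):
    """Classify a critical point by the signs of the Hessian's eigenvalues."""
    eig = np.linalg.eigvalsh(numerical_hessian(f, np.asarray(x, dtype=float)))
    if np.all(eig > 0):
        return "local minimum"      # positive definite
    if np.all(eig < 0):
        return "local maximum"      # negative definite
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle point"       # indefinite
    return "inconclusive (semidefinite Hessian)"

print(classify_critical_point(lambda v: v[0]**2 + v[1]**2, [0, 0]))  # local minimum
print(classify_critical_point(lambda v: v[0]**2 - v[1]**2, [0, 0]))  # saddle point
```

The semidefinite branch reflects the fact that the second-order test is genuinely inconclusive when some eigenvalues vanish.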
Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods. === Global convergence === More generally, if the objective function is not a quadratic function, then many optimization methods use other methods to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points. == Computational optimization techniques == To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge). 
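The multistart idea described above can be sketched in a few lines: run a cheap local optimizer from several random starting points and keep the best result. The test function, step size and starting interval below are illustrative choices, not prescribed by any particular method.

```python
# Sketch of a multistart strategy: repeated local optimization from random starts.
import random

def grad_descent(df, x0, lr=0.01, steps=2000):
    """Local optimizer: fixed-step gradient descent in one dimension."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

def multistart(f, df, starts):
    """Keep the best local optimum found over all starting points."""
    return min((grad_descent(df, x0) for x0 in starts), key=f)

f = lambda x: x**4 - 3 * x**2 + x        # multimodal: two local minima
df = lambda x: 4 * x**3 - 6 * x + 1

random.seed(0)
best = multistart(f, df, [random.uniform(-3, 3) for _ in range(20)])
print(round(best, 3))  # close to the global minimizer, about -1.301
```

Any single run of the local optimizer may land in the shallow minimum near x ≈ 1.13; the outer minimum over starts recovers the global one.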
=== Optimization algorithms === Simplex algorithm of George Dantzig, designed for linear programming Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming Variants of the simplex algorithm that are especially suited for network optimization Combinatorial algorithms Quantum optimization algorithms === Iterative methods === The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high. One major criterion for optimizers is the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate, e.g. approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself. Methods that evaluate Hessians (or approximate Hessians, using finite differences): Newton's method Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems.
Some versions can handle large-dimensional problems. Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians. Methods that evaluate gradients, or approximate gradients in some way (or even subgradients): Coordinate descent methods: Algorithms which update a single coordinate in each iteration Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite–precision computers.) Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems. Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient–projection methods are similar to conjugate–gradient methods. Bundle method of descent: An iterative method for small–medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods). Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods. Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems). Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000). 
Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation. Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used. Interpolation methods Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below. Mirror descent === Heuristics === Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics: == Applications == === Mechanics === Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be addressed by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems. This approach may be applied in cosmology and astrophysics.
=== Economics and finance === Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics. Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments. 
=== Electrical engineering === Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis. === Civil engineering === Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization. === Operations research === Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods. === Control engineering === Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled. 
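The receding-horizon loop used by MPC-style controllers can be sketched in miniature. Everything below is a hedged toy: the scalar plant x' = a·x + b·u, the cost weights, the horizon and the coarse control grid are all hypothetical, and the finite-horizon problem is solved by brute force where a real solver would use a structured quadratic program.

```python
# Toy receding-horizon sketch: plan over a short horizon, apply the first move.
import itertools

A, B = 1.1, 0.5                          # hypothetical unstable plant, control gain
HORIZON = 4
U_GRID = [-1.0, -0.5, 0.0, 0.5, 1.0]     # encodes the constraint |u| <= 1

def plan(x0):
    """Solve the finite-horizon problem by brute force; return the best sequence."""
    def cost(seq):
        x, c = x0, 0.0
        for u in seq:
            x = A * x + B * u
            c += x * x + 0.1 * u * u     # penalise state deviation and control effort
        return c
    return min(itertools.product(U_GRID, repeat=HORIZON), key=cost)

x = 2.0
for _ in range(10):                      # online loop: re-plan at every step
    u = plan(x)[0]                       # apply only the first planned move
    x = A * x + B * u
print(abs(x) < 0.3)  # True: the state has been regulated toward the origin
```

Re-planning at every step, rather than executing a whole precomputed sequence, is what makes the scheme robust to model error and disturbances.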
=== Geophysics === Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used. === Molecular modeling === Nonlinear optimization methods are widely used in conformational analysis. === Computational systems biology === Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways. === Machine learning === == Solvers == == See also == == Notes == == Further reading == Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. ISBN 0-521-83378-7. Gill, P. E.; Murray, W.; Wright, M. H. (1982). Practical Optimization. London: Academic Press. ISBN 0-12-283952-8. Lee, Jon (2004). A First Course in Combinatorial Optimization. Cambridge University Press. ISBN 0-521-01012-8. Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin: Springer. ISBN 0-387-30303-0. G.L. Nemhauser, A.H.G. Rinnooy Kan and M.J. Todd (eds.): Optimization, Elsevier, (1989). Stanislav Walukiewicz: Integer Programming, Springer, ISBN 978-9048140688, (1990). R. Fletcher: Practical Methods of Optimization, 2nd Ed., Wiley, (2000). Panos M. Pardalos: Approximation and Complexity in Numerical Optimization: Continuous and Discrete Problems, Springer, ISBN 978-1-44194829-8, (2000). Xiaoqi Yang, K. L. Teo, Lou Caccetta (Eds.): Optimization Methods and Applications, Springer, ISBN 978-0-79236866-3, (2001). Panos M. Pardalos, and Mauricio G. C. Resende (Eds.): Handbook of Applied Optimization, Oxford Univ Pr on Demand, ISBN 978-0-19512594-8, (2002). Wil Michiels, Emile Aarts, and Jan Korst: Theoretical Aspects of Local Search, Springer, ISBN 978-3-64207148-5, (2006). Der-San Chen, Robert G. Batson, and Yu Dang: Applied Integer Programming: Modeling and Solution, Wiley, ISBN 978-0-47037306-4, (2010). Mykel J. Kochenderfer and Tim A. Wheeler: Algorithms for Optimization, The MIT Press, ISBN 978-0-26203942-0, (2019). Vladislav Bukshtynov: Optimization: Success in Practice, CRC Press (Taylor & Francis), ISBN 978-1-03222947-8, (2023). Rosario Toscano: Solving Optimization Problems with the Heuristic Kalman Algorithm: New Stochastic Methods, Springer, ISBN 978-3-031-52458-5, (2024). Immanuel M. Bomze, Tibor Csendes, Reiner Horst and Panos M. Pardalos: Developments in Global Optimization, Kluwer Academic, ISBN 978-1-4419-4768-0, (2010). == External links == "Decision Tree for Optimization Software". Links to optimization source codes "Global optimization". "EE364a: Convex Optimization I". Course from Stanford University. Varoquaux, Gaël. "Mathematical Optimization: Finding Minima of Functions".
https://en.wikipedia.org/wiki/Mathematical_optimization
N-version programming (NVP), also known as multiversion programming or multiple-version dissimilar software, is a method or process in software engineering where multiple functionally equivalent programs are independently generated from the same initial specifications. The concept of N-version programming was introduced in 1977 by Liming Chen and Algirdas Avizienis with the central conjecture that the "independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program". The aim of NVP is to improve the reliability of software operation by building in fault tolerance or redundancy. == NVP approach == The general steps of N-version programming are: An initial specification of the intended functionality of the software is developed. The specification should unambiguously define: functions, data formats (which include comparison vectors, c-vectors, and comparison status indicators, cs-indicators), cross-check points (cc-points), comparison algorithm, and responses to the comparison algorithm. From the specifications, two or more versions of the program are independently developed, each by a group that does not interact with the others. The implementations of these functionally equivalent programs use different algorithms and programming languages. At various points of the program, special mechanisms are built into the software which allow the program to be governed by the N-version execution environment (NVX). These special mechanisms include: comparison vectors (c-vectors, a data structure representing the program's state), comparison status indicators (cs-indicators), and synchronization mechanisms. The resulting programs are called N-version software (NVS). Some N-version execution environment (NVX) is developed which runs the N-version software and makes final decisions of the N-version programs as a whole given the output of each individual N-version program. 
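The NVX's role as final arbiter can be sketched with the simplest possible decision algorithm, a majority vote. This is an illustrative toy, not from the Chen and Avizienis paper: the three "versions" below are hypothetical independently written implementations of one specification (integer square root), one of which is faulty.

```python
# Toy NVX-style driver: run N versions on the same inputs, accept the majority.
from collections import Counter

def nvx_decide(versions, inputs):
    """Run every version on the same inputs and return the majority output.

    A real NVX would compare c-vectors at cc-points and apply a recovery
    strategy; here, disagreement without a majority simply raises.
    """
    outputs = [v(*inputs) for v in versions]
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority agreement among versions")
    return value

# Three hypothetical, independently developed versions of one specification.
def version_a(n):
    return int(n ** 0.5)

def version_b(n):
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def version_c(n):
    return n // 2   # faulty implementation

print(nvx_decide([version_a, version_b, version_c], (16,)))  # 4: the fault is outvoted
```

The sketch also makes the central conjecture concrete: majority voting only helps if the versions do not fail on the same inputs in the same way.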
The implementation of the decision algorithms can vary, ranging from something as simple as accepting the most frequently occurring output (for instance, if a majority of versions agree on some output, then it is likely to be correct) to some more complex algorithm. == Criticisms == Researchers have argued that different programming teams can make similar mistakes. In 1986, Knight & Leveson conducted an experiment to evaluate the assumption of independence in NVP; they found that the assumption of independent failures in N-version programs failed statistically. The weakness of an NVP program lies in the decision algorithm. The question of correctness of an NVP program depends partially on the algorithm the NVX uses to determine what output is "correct" given the multitude of outputs by each individual N-version program. In theory, output from multiple independent versions is more likely to be correct than output from a single version. However, there is debate whether or not the improvements of N-version development are enough to warrant the time, additional requirements, and costs of using the NVP method. In particular, under certain models of reliability and design effort, it has been shown that improvements due to using NVP are less than if all of the effort was concentrated on improving the reliability of a single version. “There has been considerable debate as to realizing the full potential from n-version programming as it makes the assumption that the independence will lead to statistically independent mistakes. Evidence has shown that this premise may be faulty [12].” [1] == Applications == N-version programming has been applied to software in switching trains, performing flight control computations on modern airliners, electronic voting (the SAVE System), and the detection of zero-day exploits, among other uses.
== See also == Redundancy (engineering) Triple modular redundancy Data redundancy Fault tolerant design Reliability engineering Safety engineering == References == == External links == N-version programming in the RKBExplorer
https://en.wikipedia.org/wiki/N-version_programming
Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical (i.e. suggesting a theory to explain observed facts) rather than mathematical (i.e. proving a property for all members of a well-ordered set) induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples. Schema: positive examples + negative examples + background knowledge ⇒ hypothesis. Inductive logic programming is particularly useful in bioinformatics and natural language processing. == History == Building on earlier work on Inductive inference, Gordon Plotkin was the first to formalise induction in a clausal setting around 1970, adopting an approach of generalising from examples. In 1981, Ehud Shapiro introduced several ideas that would shape the field in his new approach of model inference, an algorithm employing refinement and backtracing to search for a complete axiomatisation of given examples. His first implementation was the Model Inference System in 1981: a Prolog program that inductively inferred Horn clause logic programs from positive and negative examples. The term Inductive Logic Programming was first introduced in a paper by Stephen Muggleton in 1990, defined as the intersection of machine learning and logic programming. Muggleton and Wray Buntine introduced predicate invention and inverse resolution in 1988. Several inductive logic programming systems that proved influential appeared in the early 1990s. FOIL, introduced by Ross Quinlan in 1990 was based on upgrading propositional learning algorithms AQ and ID3. 
Golem, introduced by Muggleton and Feng in 1990, went back to a restricted form of Plotkin's least generalisation algorithm. The Progol system, introduced by Muggleton in 1995, first implemented inverse entailment, and inspired many later systems. Aleph, a descendant of Progol introduced by Ashwin Srinivasan in 2001, is still one of the most widely used systems as of 2022. At around the same time, the first practical applications emerged, particularly in bioinformatics, where by 2000 inductive logic programming had been successfully applied to drug design, carcinogenicity and mutagenicity prediction, and elucidation of the structure and function of proteins. Unlike the focus on automatic programming inherent in the early work, these fields used inductive logic programming techniques from a viewpoint of relational data mining. The success of those initial applications and the lack of progress in recovering larger traditional logic programs shaped the focus of the field. Recently, classical tasks from automated programming have moved back into focus, as the introduction of meta-interpretative learning makes predicate invention and learning recursive programs more feasible. This technique was pioneered with the Metagol system introduced by Muggleton, Dianhuan Lin, Niels Pahlavi and Alireza Tamaddoni-Nezhad in 2014. This allows ILP systems to work with fewer examples, and brought successes in learning string transformation programs, answer set grammars and general algorithms. == Setting == Inductive logic programming has adopted several different learning settings, the most common of which are learning from entailment and learning from interpretations. In both cases, the input is provided in the form of background knowledge B, a logical theory (commonly in the form of clauses used in logic programming), as well as positive and negative examples, denoted E⁺ and E⁻ respectively.
The output is given as a hypothesis H, itself a logical theory that typically consists of one or more clauses. The two settings differ in the format of examples presented. === Learning from entailment === As of 2022, learning from entailment is by far the most popular setting for inductive logic programming. In this setting, the positive and negative examples are given as finite sets E⁺ and E⁻ of positive and negated ground literals, respectively. A correct hypothesis H is a set of clauses satisfying the following requirements, where the turnstile symbol ⊨ stands for logical entailment: Completeness: B ∪ H ⊨ E⁺; Consistency: B ∪ H ∪ E⁻ ⊭ false. Completeness requires any generated hypothesis H to explain all positive examples E⁺, and consistency forbids generation of any hypothesis H that is inconsistent with the negative examples E⁻, both given the background knowledge B. In Muggleton's setting of concept learning, "completeness" is referred to as "sufficiency", and "consistency" as "strong consistency". Two further conditions are added: "Necessity", which postulates that B does not entail E⁺, does not impose a restriction on H, but forbids any generation of a hypothesis as long as the positive facts are explainable without it. "Weak consistency", which states that no contradiction can be derived from B ∧ H, forbids generation of any hypothesis H that contradicts the background knowledge B. Weak consistency is implied by strong consistency; if no negative examples are given, both requirements coincide. Weak consistency is particularly important in the case of noisy data, where completeness and strong consistency cannot be guaranteed.
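For a ground Datalog fragment, the completeness and consistency checks can be made concrete: saturate B ∪ H to its least model by forward chaining, then test that every positive example is derived and no negative one is. The family-relations data, hypothesis and naive chainer below are illustrative, not taken from any ILP system.

```python
# Toy forward chainer for Datalog: atoms are tuples like ('parent', 'ann', 'bob');
# variables are capitalised strings, constants are lowercase.

def match(atom, fact, subst):
    """Try to extend subst so that atom matches fact; None on failure."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    s = dict(subst)
    for a, f in zip(atom[1:], fact[1:]):
        if a[0].isupper():              # variable: bind or check binding
            if s.setdefault(a, f) != f:
                return None
        elif a != f:                    # constant mismatch
            return None
    return s

def satisfy(body, facts, subst):
    """Yield every substitution that makes all body atoms true in facts."""
    if not body:
        yield subst
        return
    for fact in facts:
        s = match(body[0], fact, subst)
        if s is not None:
            yield from satisfy(body[1:], facts, s)

def least_model(facts, rules):
    """Saturate the facts under rules of the form (head, [body atoms...])."""
    model = set(facts)
    while True:
        new = {(head[0],) + tuple(s.get(t, t) for t in head[1:])
               for head, body in rules
               for s in satisfy(body, model, {})} - model
        if not new:
            return model
        model |= new

def correct_hypothesis(B, H, pos, neg):
    """Completeness and consistency of hypothesis H given background B."""
    m = least_model(B, H)
    return all(e in m for e in pos) and not any(e in m for e in neg)

B = {('parent', 'ann', 'bob'), ('parent', 'bob', 'carol')}
H = [(('grandparent', 'X', 'Z'),
      [('parent', 'X', 'Y'), ('parent', 'Y', 'Z')])]
pos = [('grandparent', 'ann', 'carol')]
neg = [('grandparent', 'bob', 'ann')]
print(correct_hypothesis(B, H, pos, neg))  # True
```

An ILP system searches the space of such hypotheses H; the sketch only implements the check that a candidate is correct.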
=== Learning from interpretations === In learning from interpretations, the positive and negative examples are given as a set of complete or partial Herbrand structures, each of which is itself a finite set of ground literals. Such a structure e is said to be a model of the set of clauses B ∪ H if for any substitution θ and any clause head ← body in B ∪ H such that body θ ⊆ e, head θ ⊆ e also holds. The goal is then to output a hypothesis that is complete, meaning every positive example is a model of B ∪ H, and consistent, meaning that no negative example is a model of B ∪ H. == Approaches to ILP == An inductive logic programming system is a program that takes as input logic theories B, E⁺, E⁻ and outputs a correct hypothesis H with respect to those theories. A system is complete if and only if for any input logic theories B, E⁺, E⁻ any correct hypothesis H with respect to these input theories can be found with its hypothesis search procedure. Inductive logic programming systems can be roughly divided into two classes, search-based and meta-interpretative systems. Search-based systems exploit that the space of possible clauses forms a complete lattice under the subsumption relation, where one clause C₁ subsumes another clause C₂ if there is a substitution θ such that C₁θ, the result of applying θ to C₁, is a subset of C₂. This lattice can be traversed either bottom-up or top-down.
=== Bottom-up search === Bottom-up methods to search the subsumption lattice have been investigated since Plotkin's first work on formalising induction in clausal logic in 1970. Techniques used include least general generalisation, based on anti-unification, and inverse resolution, based on inverting the resolution inference rule. ==== Least general generalisation ==== A least general generalisation algorithm takes as input two clauses C₁ and C₂ and outputs the least general generalisation of C₁ and C₂, that is, a clause C that subsumes C₁ and C₂, and that is subsumed by every other clause that subsumes C₁ and C₂. The least general generalisation can be computed by first computing all selections from C₁ and C₂, which are pairs of literals (L, M) ∈ C₁ × C₂ sharing the same predicate symbol and negated/unnegated status. Then, the least general generalisation is obtained as the disjunction of the least general generalisations of the individual selections, which can be obtained by first-order syntactical anti-unification. To account for background knowledge, inductive logic programming systems employ relative least general generalisations, which are defined in terms of subsumption relative to a background theory. In general, such relative least general generalisations are not guaranteed to exist; however, if the background theory B is a finite set of ground literals, then the negation of B is itself a clause. In this case, a relative least general generalisation can be computed by disjoining the negation of B with both C₁ and C₂ and then computing their least general generalisation as before. Relative least general generalisations are the foundation of the bottom-up system Golem.
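Syntactic anti-unification of two terms can be sketched compactly. The following is an illustrative toy, not Golem's implementation: terms are nested tuples, and a shared table ensures that the same pair of mismatching subterms is always generalised to the same variable (the property that makes the result *least* general).

```python
# Toy anti-unification: terms are nested tuples, e.g. ('f', ('a',), ('b',))
# for f(a, b); fresh variables are plain strings "V0", "V1", ...

def lgg(t1, t2, table=None):
    """Least general generalisation of two terms (after Plotkin).

    Matching functors are kept and their arguments generalised recursively;
    any mismatching pair of subterms is replaced by a variable drawn from
    the shared table, so repeated pairs reuse the same variable.
    """
    if table is None:
        table = {}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:
        table[(t1, t2)] = "V%d" % len(table)
    return table[(t1, t2)]

# lgg of head(f(a), a) and head(f(b), b): both a/b mismatches share one
# variable, yielding head(f(V0), V0).
print(lgg(('head', ('f', ('a',)), ('a',)),
          ('head', ('f', ('b',)), ('b',))))
```

Without the shared table, the two mismatches would get distinct variables and the result would be more general than necessary.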
==== Inverse resolution ==== Inverse resolution is an inductive reasoning technique that involves inverting the resolution operator. Inverse resolution takes information about the resolvent of a resolution step and computes possible resolving clauses. Two types of inverse resolution operator are in use in inductive logic programming: V-operators and W-operators. A V-operator takes clauses R and C₁ as input and returns a clause C₂ such that R is the resolvent of C₁ and C₂. A W-operator takes two clauses R₁ and R₂ and returns three clauses C₁, C₂ and C₃ such that R₁ is the resolvent of C₁ and C₂, and R₂ is the resolvent of C₂ and C₃. Inverse resolution was first introduced by Stephen Muggleton and Wray Buntine in 1988 for use in the inductive logic programming system Cigol. By 1993, this had spawned a surge of research into inverse resolution operators and their properties. === Top-down search === The ILP systems Progol, Hail and Imparo find a hypothesis H using the principle of inverse entailment for theories B, E, H: B ∧ H ⊨ E ⟺ B ∧ ¬E ⊨ ¬H. First they construct an intermediate theory F, called a bridge theory, satisfying the conditions B ∧ ¬E ⊨ F and F ⊨ ¬H. Then, since H ⊨ ¬F, they generalise the negation of the bridge theory F with anti-entailment. However, the operation of anti-entailment is computationally expensive, since it is highly nondeterministic.
Therefore, an alternative hypothesis search can be conducted using the inverse subsumption (anti-subsumption) operation instead, which is less non-deterministic than anti-entailment. Questions arise about the completeness of the hypothesis search procedure of a specific inductive logic programming system. For example, Progol's hypothesis search procedure, based on the inverse entailment inference rule, is not complete, as shown by Yamamoto's example. On the other hand, Imparo is complete both by its anti-entailment procedure and by its extended inverse subsumption procedure. === Metainterpretive learning === Rather than explicitly searching the hypothesis graph, metainterpretive or meta-level systems encode the inductive logic programming problem as a meta-level logic program which is then solved to obtain an optimal hypothesis. Formalisms used to express the problem specification include Prolog and answer set programming, with existing Prolog systems and answer set solvers used for solving the constraints. An example of a Prolog-based system is Metagol, which is based on a meta-interpreter in Prolog, while ASPAL and ILASP are based on an encoding of the inductive logic programming problem in answer set programming. === Evolutionary learning === Evolutionary algorithms in ILP use a population-based approach to evolve hypotheses, refining them through selection, crossover, and mutation. Methods like EvoLearner have been shown to outperform traditional approaches on structured machine learning benchmarks.
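As a rough illustration of the evolutionary approach, the following sketch evolves a conjunctive rule through selection and mutation, scoring candidates by how many positive examples they cover and how many negative examples they reject. The data, the attribute literals, and the deliberately simplified algorithm are all invented for illustration; this is not EvoLearner's actual method:

```python
import random

random.seed(0)

# Toy data: each example is a set of attribute facts; the (made-up) target
# concept is "barks". A candidate rule is a conjunction of literals,
# represented as a frozenset.
pos = [{"has_fur", "barks", "small"}, {"has_fur", "barks", "large"}]
neg = [{"has_fur", "meows"}, {"feathers", "chirps"}]
literals = ["has_fur", "barks", "meows", "feathers", "small"]

def fitness(rule):
    covers = lambda ex: rule <= ex          # rule covers ex if all literals hold
    return (sum(covers(e) for e in pos)     # reward covered positives
            + sum(not covers(e) for e in neg))  # reward rejected negatives

def mutate(rule):
    lit = random.choice(literals)           # toggle one literal in or out
    return rule ^ {lit}

population = [frozenset() for _ in range(8)]
for _ in range(40):                          # selection + mutation loop
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]
    population = survivors + [frozenset(mutate(set(r))) for r in survivors]

best = max(population, key=fitness)
print(sorted(best), fitness(best))
```

Real systems add crossover between rules and operate over first-order hypothesis spaces; the skeleton of evaluate-select-mutate is the same.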
== List of implementations == 1BC and 1BC2: first-order naive Bayesian classifiers ACE (A Combined Engine) Aleph Atom Claudien DL-Learner DMax FastLAS (Fast Learning from Answer Sets) FOIL (First Order Inductive Learner) Golem ILASP (Inductive Learning of Answer Set Programs) Imparo Inthelex (INcremental THEory Learner from EXamples) Lime Metagol Mio MIS (Model Inference System) by Ehud Shapiro Ontolearn Popper PROGOL RSD Warmr (now included in ACE) ProGolem == Probabilistic inductive logic programming == Probabilistic inductive logic programming adapts the setting of inductive logic programming to learning probabilistic logic programs. It can be considered as a form of statistical relational learning within the formalism of probabilistic logic programming. Given background knowledge as a probabilistic logic program B, and a set of positive and negative examples E⁺ and E⁻, the goal of probabilistic inductive logic programming is to find a probabilistic logic program H such that the probability of the positive examples according to H ∪ B is maximised and the probability of the negative examples is minimised. This problem has two variants: parameter learning and structure learning. In the former, one is given the structure (the clauses) of H and the goal is to infer the probability annotations of the given clauses, while in the latter the goal is to infer both the structure and the probability parameters of H. Just as in classical inductive logic programming, the examples can be given as examples or as (partial) interpretations. === Parameter Learning === Parameter learning for languages following the distribution semantics has been performed by using an expectation-maximisation algorithm or by gradient descent.
An expectation-maximisation algorithm consists of a cycle in which the steps of expectation and maximisation are repeatedly performed. In the expectation step, the distribution of the hidden variables is computed according to the current values of the probability parameters, while in the maximisation step, the new values of the parameters are computed. Gradient descent methods compute the gradient of the target function and iteratively modify the parameters by moving in the direction of the gradient. === Structure Learning === Structure learning was pioneered by Daphne Koller and Avi Pfeffer in 1997, where the authors learn the structure of first-order rules with associated probabilistic uncertainty parameters. Their approach involves generating the underlying graphical model in a preliminary step and then applying expectation-maximisation. In 2008, De Raedt et al. presented an algorithm for performing theory compression on ProbLog programs, where theory compression refers to removing as many clauses as possible from the theory in order to maximise the probability of a given set of positive and negative examples. No new clause can be added to the theory. In the same year, Meert et al. introduced a method for learning parameters and structure of ground probabilistic logic programs by considering the Bayesian networks equivalent to them and applying techniques for learning Bayesian networks. ProbFOIL, introduced by De Raedt and Ingo Thon in 2010, combined the inductive logic programming system FOIL with ProbLog. Logical rules are learned from probabilistic data in the sense that both the examples themselves and their classifications can be probabilistic. The set of rules has to allow one to predict the probability of the examples from their description. In this setting, the parameters (the probability values) are fixed and the structure has to be learned.
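As the smallest possible illustration of the parameter-learning variant described earlier, a single probabilistic fact can be fitted by gradient ascent on a Bernoulli log-likelihood. The function name, the step-size scheme, and the counts are all illustrative, not taken from any of the systems above:

```python
def learn_probability(n_pos, n_neg, lr=0.1, steps=2000):
    """Fit the probability p of one probabilistic fact by gradient ascent
    on the log-likelihood L(p) = n_pos*log(p) + n_neg*log(1 - p)."""
    p = 0.5
    for _ in range(steps):
        grad = n_pos / p - n_neg / (1 - p)   # dL/dp
        p += lr * grad / (n_pos + n_neg)     # step scaled for stability
        p = min(max(p, 1e-6), 1 - 1e-6)      # keep p inside (0, 1)
    return p

# With 7 positive and 3 negative examples the maximum-likelihood value is 0.7
print(round(learn_probability(7, 3), 3))
```

For programs with many clauses and hidden derivations, the gradient of the likelihood is no longer available in closed form, which is where the expectation-maximisation machinery above comes in.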
In 2011, Elena Bellodi and Fabrizio Riguzzi introduced SLIPCASE, which performs a beam search among probabilistic logic programs by iteratively refining probabilistic theories and optimising the parameters of each theory using expectation-maximisation. Its extension SLIPCOVER, proposed in 2014, uses bottom clauses generated as in Progol to guide the refinement process, thus reducing the number of revisions and exploring the search space more effectively. Moreover, SLIPCOVER separates the search for promising clauses from that of the theory: the space of clauses is explored with a beam search, while the space of theories is searched greedily. == See also == Commonsense reasoning Formal concept analysis Inductive reasoning Inductive programming Inductive probability Statistical relational learning Version space learning == References == This article incorporates text from a free content work. Licensed under CC-BY 4.0 (license statement/permission). Text taken from A History of Probabilistic Inductive Logic Programming, Fabrizio Riguzzi, Elena Bellodi and Riccardo Zese, Frontiers Media. == Further reading ==
https://en.wikipedia.org/wiki/Inductive_logic_programming
Microsoft 365 is a family of productivity software, collaboration tools, and cloud-based services, encompassing online services, products formerly marketed under Microsoft Office, and enterprise products and services. This list contains all the programs that are, or have been, in Microsoft Office since it was released for classic Mac OS in 1989, and for Windows in 1990. == Current Microsoft 365 applications == == Server applications == == Discontinued programs (includes Microsoft Office programs) == This list includes programs from before Microsoft Office was rebranded as Office 365 (and eventually given its current title of Microsoft 365). == See also == Microsoft Office shared tools List of office suites Comparison of office suites Visual Studio Microsoft Works == References == == External links == The Microsoft 365 page for Windows The Microsoft 365 page for macOS
https://en.wikipedia.org/wiki/List_of_Microsoft_365_applications_and_services
The following is a list of television series that have been broadcast by the American pay television channel Cinemax. Although the large majority of Cinemax's programming consists of feature films, the network has produced and broadcast, either in first-run form or as secondary runs, a limited number of television series over the course of the network's existence. In February 2011, it was announced that Cinemax would begin to offer mainstream original programming to compete with sister channel HBO, and rivals Showtime and Starz; the channel was slated to develop action-oriented original mainstream series aimed at males ages 18–49. The decision was also due in part to competition from other on-demand movie services such as Netflix and iTunes, and to change Cinemax's image from a channel mostly known for its former Max After Dark programming. With the launch of the HBO Max streaming service in 2020, Cinemax's non-adult library of programming shifted to that service throughout 2021, and original programming for the network has all but been discontinued under the ownership of AT&T, then Warner Bros. Discovery, with the desktop "Cinemax Go" service ending on July 31, 2022. == Original programming == === Drama === === Animation === === Co-productions === These shows have been commissioned by Cinemax in cooperation with a partner from another country. == Classic programming == === Sketch comedy === === Max After Dark === == References ==
https://en.wikipedia.org/wiki/List_of_Cinemax_original_programming
In software engineering, profiling (program profiling, software profiling) is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods. == Gathering program events == Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. == Use of profilers == Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical sections of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing... The output of a profiler may be: A statistical summary of the events observed (a profile) Summary profile information is often shown annotated against the source code statements where the events occur, so the size of measurement data is linear to the code size of the program. 
/* ------------ source ------------------------- count */
0001  IF X = "A"                                   0055
0002     THEN DO
0003        ADD 1 to XCOUNT                        0032
0004     ELSE
0005  IF X = "B"                                   0055
A stream of recorded events (a trace) For sequential programs, a summary profile is usually sufficient, but performance problems in parallel programs (waiting for messages or synchronization issues) often depend on the time relationship of events, thus requiring a full trace to get an understanding of what is happening. The size of a (full) trace is proportional to the program's instruction path length, making it somewhat impractical. A trace may therefore be initiated at one point in a program and terminated at another point to limit the output. An ongoing interaction with the hypervisor (continuous or periodic monitoring via on-screen display for instance) This provides the opportunity to switch a trace on or off at any desired point during execution in addition to viewing on-going metrics about the (still executing) program. It also provides the opportunity to suspend asynchronous processes at critical points to examine interactions with other parallel processes in more detail. A profiler can be applied to an individual method or at the scale of a module or program, to identify performance bottlenecks by making long-running code obvious. A profiler can be used to understand code from a timing point of view, with the objective of optimizing it to handle various runtime conditions or various loads. Profiling results can be ingested by a compiler that provides profile-guided optimization. Profiling results can be used to guide the design and optimization of an individual algorithm; the Krauss matching wildcards algorithm is an example. Profilers are built into some application performance management systems that aggregate profiling data to provide insight into transaction workloads in distributed applications.
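A summary profile like the annotated listing above can be produced in Python with the standard-library cProfile and pstats modules; this minimal sketch (the workload functions are invented for illustration) prints a flat per-function summary of call counts and times:

```python
import cProfile
import io
import pstats

def slow():
    # An artificial hot spot: a loop doing real work.
    return sum(i * i for i in range(200_000))

def fast():
    return 42

def main():
    for _ in range(5):
        slow()
        fast()

# Profile main() and print a flat summary: call counts and cumulative
# times per function -- the "summary profile" form of profiler output.
profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

The resulting table plays the same role as the annotated source listing: it tells you where the time went, but says nothing about the order of events, which is what a trace would add.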
== History == Performance-analysis tools existed on IBM/360 and IBM/370 platforms from the early 1970s, usually based on timer interrupts which recorded the program status word (PSW) at set timer-intervals to detect "hot spots" in executing code. This was an early example of sampling (see below). In early 1974 instruction-set simulators permitted full trace and other performance-monitoring features. Profiler-driven program analysis on Unix dates back to 1973, when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982 gprof extended the concept to a complete call graph analysis. In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM (Analysis Tools with OM). The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data. This technique - modifying a program to analyze itself - is known as "instrumentation". In 2004 both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999. == Profiler types based on output == === Flat profiler === Flat profilers compute the average call times, from the calls, and do not break down the call times based on the callee or the context. === Call-graph profiler === Call graph profilers show the call times, and frequencies of the functions, and also the call-chains involved based on the callee. In some tools full context is not preserved. === Input-sensitive profiler === Input-sensitive profilers add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values. They generate charts that characterize how an application's performance scales as a function of its input. 
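An input-sensitive profile of the kind just described can be approximated by hand: time a workload at several input sizes and chart how the runtime scales. The workload and the helper name below are invented for illustration:

```python
import time

def sort_workload(n):
    # A simple workload whose cost grows with its input size.
    data = list(range(n, 0, -1))   # reverse-sorted input of size n
    data.sort()

def input_sensitive_profile(fn, sizes, repeat=3):
    """Measure best-of-`repeat` wall-clock time of fn at each input size,
    yielding the size-vs-time curve an input-sensitive profiler charts."""
    curve = {}
    for n in sizes:
        times = []
        for _ in range(repeat):
            start = time.perf_counter()
            fn(n)
            times.append(time.perf_counter() - start)
        curve[n] = min(times)      # best-of-N damps scheduling noise
    return curve

curve = input_sensitive_profile(sort_workload, [1_000, 10_000, 100_000])
for n, t in curve.items():
    print(f"n={n:>6}: {t:.6f}s")
```

Fitting a curve to such measurements is how an input-sensitive profiler characterises asymptotic behaviour rather than a single run.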
== Data granularity in profiler types == Profilers, which are also programs themselves, analyze target programs by collecting information on the target program's execution. Based on their data granularity, which depends upon how profilers collect information, they are classified as event-based or statistical profilers. Profilers interrupt program execution to collect information. Those interrupts can limit time measurement resolution, which implies that timing results should be taken with a grain of salt. Basic block profilers report a number of machine clock cycles devoted to executing each line of code, or timing based on adding those together; the timings reported per basic block may not reflect a difference between cache hits and misses. === Event-based profilers === Event-based profilers are available for the following programming languages: Java: the JVMTI (JVM Tools Interface) API, formerly JVMPI (JVM Profiling Interface), provides hooks to profilers, for trapping events like calls, class load/unload, and thread enter/leave. .NET: Can attach a profiling agent as a COM server to the CLR using the Profiling API. Like Java, the runtime then provides various callbacks into the agent, for trapping events like method JIT / enter / leave, object creation, etc. Particularly powerful in that the profiling agent can rewrite the target application's bytecode in arbitrary ways. Python: Python profiling includes the profile module, hotshot (which is call-graph based), and the 'sys.setprofile' function for trapping events like c_{call,return,exception} and python_{call,return,exception}. Ruby: Ruby also uses a similar interface to Python for profiling. A flat profiler is provided by the profile.rb module, and ruby-prof, a C extension, is also available. === Statistical profilers === These profilers operate by sampling. A sampling profiler probes the target program's call stack at regular intervals using operating system interrupts.
Sampling profiles are typically less numerically accurate and specific, providing only a statistical approximation, but allow the target program to run at near full speed. "The actual amount of error is usually more than one sampling period. In fact, if a value is n times the sampling period, the expected error in it is the square-root of n sampling periods." In practice, sampling profilers can often provide a more accurate picture of the target program's execution than other approaches, as they are not as intrusive to the target program and thus don't have as many side effects (such as on memory caches or instruction decoding pipelines). Also since they don't affect the execution speed as much, they can detect issues that would otherwise be hidden. They are also relatively immune to over-evaluating the cost of small, frequently called routines or 'tight' loops. They can show the relative amount of time spent in user mode versus interruptible kernel mode such as system call processing. Unfortunately, running kernel code to handle the interrupts incurs a minor loss of CPU cycles from the target program, diverts cache usage, and cannot distinguish the various tasks occurring in uninterruptible kernel code (microsecond-range activity) from user code. Dedicated hardware can do better: ARM Cortex-M3 and some recent MIPS processors' JTAG interfaces have a PCSAMPLE register, which samples the program counter in a truly undetectable manner, allowing non-intrusive collection of a flat profile. Some commonly used statistical profilers for Java/managed code are SmartBear Software's AQtime and Microsoft's CLR Profiler. Those profilers also support native code profiling, along with Apple Inc.'s Shark (OSX), OProfile (Linux), Intel VTune and Parallel Amplifier (part of Intel Parallel Studio), and Oracle Performance Analyzer, among others. === Instrumentation === This technique effectively adds instructions to the target program to collect the required information. 
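In its simplest, manual form, such instrumentation is a wrapper the programmer adds by hand to count calls and accumulate elapsed time. A Python sketch (the decorator and workload names are illustrative):

```python
import functools
import time

def instrument(fn):
    """Manual instrumentation: wrap a function to count its calls and
    accumulate its wall-clock time, as a programmer might by hand."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # Record the measurement even if fn raises.
            wrapper.calls += 1
            wrapper.total_time += time.perf_counter() - start
    wrapper.calls = 0
    wrapper.total_time = 0.0
    return wrapper

@instrument
def task(n):
    return sum(range(n))

for _ in range(3):
    task(10_000)

print(task.calls)           # 3
print(task.total_time > 0)  # True
```

Everything that follows in this section — the overhead, the perturbation of results — applies to this wrapper just as it does to automated instrumentation.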
Note that instrumenting a program can cause performance changes, and may in some cases lead to inaccurate results and/or heisenbugs. The effect will depend on what information is being collected, on the level of timing details reported, and on whether basic block profiling is used in conjunction with instrumentation. For example, adding code to count every procedure/routine call will probably have less effect than counting how many times each statement is obeyed. A few computers have special hardware to collect information; in this case the impact on the program is minimal. Instrumentation is key to determining the level of control and amount of time resolution available to the profilers. Manual: Performed by the programmer, e.g. by adding instructions to explicitly calculate runtimes, simply count events, or call measurement APIs such as the Application Response Measurement standard. Automatic source level: instrumentation added to the source code by an automatic tool according to an instrumentation policy. Intermediate language: instrumentation added to assembly or decompiled bytecodes, giving support for multiple higher-level source languages and avoiding (non-symbolic) binary offset re-writing issues. Compiler assisted. Binary translation: the tool adds instrumentation to a compiled executable. Runtime instrumentation: directly before execution, the code is instrumented. The program run is fully supervised and controlled by the tool. Runtime injection: more lightweight than runtime instrumentation. Code is modified at runtime to have jumps to helper functions. === Interpreter instrumentation === Interpreter debug options can enable the collection of performance metrics as the interpreter encounters each target statement. Bytecode, control-table and JIT interpreters are three examples that usually have complete control over execution of the target code, thus enabling extremely comprehensive data collection opportunities.
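CPython, for example, exposes an interpreter-level hook of this kind through sys.setprofile (mentioned earlier): the interpreter reports call and return events to a user-supplied function as it executes. A minimal event-based call counter (the traced functions are invented for illustration):

```python
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    # The interpreter invokes this hook on every call/return event;
    # here we only count Python-level function entries by name.
    if event == "call":
        call_counts[frame.f_code.co_name] += 1

def helper():
    return 1

def work():
    return helper() + helper()

sys.setprofile(tracer)   # turn the interpreter's profiling hook on
work()
sys.setprofile(None)     # turn it off again

print(call_counts["helper"])  # 2: helper was entered twice
```

Because the interpreter mediates every statement, no modification of the target code is needed — the data collection rides on the interpreter's own dispatch loop.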
=== Hypervisor/simulator === Hypervisor: Data are collected by running the (usually) unmodified program under a hypervisor. Example: SIMMON Simulator and Hypervisor: Data collected interactively and selectively by running the unmodified program under an instruction set simulator. == See also == == References == == External links == Article "Need for speed — Eliminating performance bottlenecks" on doing execution time analysis of Java applications using IBM Rational Application Developer. Profiling Runtime Generated and Interpreted Code using the VTune Performance Analyzer
https://en.wikipedia.org/wiki/Profiling_(computer_programming)
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process. The multiple threads of a given process may be executed concurrently (via multithreading capabilities), sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time. The implementation of threads and processes differs between operating systems. == History == Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. It provided users with three available configurations of the OS/360 control system, of which Multiprogramming with a Variable Number of Tasks (MVT) was one. Saltzer (1966) credits Victor A. Vyssotsky with the term "thread". The use of threads in software applications became more common in the early 2000s as CPUs began to utilize multiple cores. Applications wishing to take advantage of multiple cores for performance advantages were required to employ concurrency to utilize the multiple cores. == Related concepts == Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively. This yields a variety of related concepts. === Processes === At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such as a runtime system can itself schedule multiple threads of execution. 
If these do not share data, as in Erlang, they are usually analogously called processes, while if they share data they are usually called (user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads. A process is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – see interprocess communication. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond basic cost of context switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer (TLB), notably on x86). === Kernel threads === A kernel thread is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process scheduler is preemptive. 
Kernel threads do not own resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (a core can run multiple software threads if it supports hardware multithreading), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped. === User threads === Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). User threads as implemented by virtual machines are also called green threads. As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload. However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns.
A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing. A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks the calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/await primitives). === Fibers === Fibers are an even lighter unit of scheduling which are cooperatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than kernel or user threads. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of the OpenMP parallel programming model implement their tasks through fibers. Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct. 
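Cooperative scheduling of this kind can be mimicked in Python with generators standing in for fibers (a toy sketch; real fiber APIs differ): each "fiber" runs until it explicitly yields, and a round-robin scheduler resumes them in turn, with no preemption anywhere:

```python
from collections import deque

trace = []

def fiber(name, steps):
    # A "fiber" as a generator: it runs until it explicitly yields control.
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield

def run(fibers):
    """Toy cooperative scheduler: round-robin over fibers, each running
    until its next yield; nothing is ever preempted."""
    queue = deque(fibers)
    while queue:
        f = queue.popleft()
        try:
            next(f)          # resume the fiber until its next yield
            queue.append(f)  # it yielded: put it at the back of the queue
        except StopIteration:
            pass             # the fiber ran to completion

run([fiber("a", 2), fiber("b", 2)])
print(trace)  # ['a0', 'b0', 'a1', 'b1']
```

Note how a fiber that never yields would monopolise the scheduler — exactly the starvation hazard of cooperative multitasking discussed below.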
=== Threads vs processes === Threads differ from traditional multitasking operating-system processes in several ways: processes are typically independent, while threads exist as subsets of a process processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources processes have separate address spaces, whereas threads share their address space processes interact only through system-provided inter-process communication mechanisms context switching between threads in the same process typically occurs faster than context switching between processes Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating systems there is not so great a difference except in the cost of an address-space switch, which on some architectures (notably x86) results in a translation lookaside buffer (TLB) flush. Advantages and disadvantages of threads vs processes include: Lower resource consumption of threads: using threads, an application can operate using fewer resources than it would need when using multiple processes. Simplified sharing and communication of threads: unlike processes, which require a message passing or shared memory mechanism to perform inter-process communication (IPC), threads can communicate through data, code and files they already share. Thread crashes a process: due to threads sharing the same address space, an illegal operation performed by a thread can crash the entire process; therefore, one misbehaving thread can disrupt the processing of all the other threads in the application. == Scheduling == === Preemptive vs cooperative scheduling === Operating systems schedule threads either preemptively or cooperatively. Multi-user operating systems generally favor preemptive multithreading for its finer-grained control over execution time via context switching. 
However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing lock convoy, priority inversion, or other side-effects. In contrast, cooperative multithreading relies on threads to relinquish control of execution, thus ensuring that threads run to completion. This can cause problems if a cooperatively multitasked thread blocks by waiting on a resource or if it starves other threads by not yielding control of execution during intensive computation. === Single- vs multi-processor systems === Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. In 2002, Intel added support for simultaneous multithreading to the Pentium 4 processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor and AMD introduced the dual-core Athlon 64 X2 processor. Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching usually occurs frequently enough that users perceive the threads or tasks as running in parallel (for popular server/desktop operating systems, the maximum time slice of a thread, when other threads are waiting, is often limited to 100–200 ms). On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads. === Threading models === ==== 1:1 (kernel-level threading) ==== Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel are the simplest possible threading implementation.
OS/2 and Win32 used this approach from the start, while on Linux the GNU C Library implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD, FreeBSD, macOS, and iOS. ==== M:1 (user-level threading) ==== An M:1 model implies that all application-level threads map to one kernel-level scheduled entity; the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multithreaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time. For example: if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. GNU Portable Threads uses user-level threading, as does State Threads. ==== M:N (hybrid threading) ==== M:N maps some M number of application threads onto some N number of kernel entities, or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("M:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
==== Hybrid implementation examples ====
- Scheduler activations used by older versions of the NetBSD native POSIX threads library implementation (an M:N model as opposed to a 1:1 kernel or userspace implementation model)
- Light-weight processes used by older versions of the Solaris operating system
- Marcel from the PM2 project
- The OS for the Tera-Cray MTA-2
- The Glasgow Haskell Compiler (GHC) for the language Haskell, which uses lightweight threads that are scheduled on operating system threads
==== History of threading models in Unix systems ==== SunOS 4.x implemented light-weight processes or LWPs. NetBSD 2.x+, and DragonFly BSD implement LWPs as kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8 as well as NetBSD 2 to NetBSD 4 implemented a two-level model, multiplexing one or more user-level threads on each kernel thread (M:N model). SunOS 5.9 and later, as well as NetBSD 5, eliminated user threads support, returning to a 1:1 model. FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N; users could choose which one should be used with a given program using /etc/libmap.conf. Starting with FreeBSD 7, 1:1 became the default. FreeBSD 8 no longer supports the M:N model. == Single-threaded vs multithreaded programs == In computer programming, single-threading is the processing of one instruction at a time. In the formal analysis of the variables' semantics and process state, the term single threading can be used differently to mean "backtracking within a single thread", which is common in the functional programming community. Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution.
Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system. Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions. === Threads and data synchronization === Threads in the same process share the same address space. This allows concurrently running code to couple tightly and conveniently exchange data without the overhead or complexity of an IPC. When shared between threads, however, even simple data structures become prone to race conditions if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate. To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems to contend for the memory bus, especially if the granularity of the locking is too fine. Other synchronization APIs include condition variables, critical sections, semaphores, and monitors. === Thread pools === A popular programming pattern involving threads is that of thread pools, where a set number of threads are created at startup and then wait for tasks to be assigned. When a new task arrives, one of the waiting threads wakes up, completes the task, and returns to waiting.
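The mutex and thread-pool ideas described above can be sketched together with Python's standard library (the pool size and iteration counts are arbitrary choices for illustration):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # "counter += 1" is not atomic (load, add, store back), so
        # without the mutex concurrent workers could lose updates.
        with lock:
            counter += 1

# Thread pool: a fixed set of worker threads is created once, and each
# submitted task is picked up by whichever worker becomes free.
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(4):
        pool.submit(add, 10_000)
# Leaving the "with" block waits for all submitted tasks to finish.

print(counter)  # 40000: the mutex serialized every update
```

Without the lock, the final count would often fall short of 40,000, and by a different amount on every run, which is exactly the kind of race-condition bug described above.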
This pattern avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hands, leaving it to a library or the operating system that is better suited to optimize thread management. === Multithreaded programs vs single-threaded programs pros and cons === Multithreaded applications have the following advantages vs single-threaded ones:
- Responsiveness: multithreading can allow an application to remain responsive to input. In a one-thread program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a worker thread that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking I/O and/or Unix signals being available for obtaining similar results.
- Parallelization: applications looking to use multicore or multi-CPU systems can use multithreading to split data and tasks into parallel subtasks and let the underlying architecture manage how the threads run, either concurrently on one core or in parallel on multiple cores. GPU computing environments like CUDA and OpenCL use the multithreading model where dozens to hundreds of threads run in parallel across data on a large number of cores. This, in turn, enables better system utilization, and (provided that synchronization costs don't eat the benefits up) can provide faster program execution.
Multithreaded applications have the following drawbacks:
- Synchronization complexity and related bugs: when using shared resources typical for threaded programs, the programmer must be careful to avoid race conditions and other non-intuitive behaviors.
In order for data to be correctly manipulated, threads will often need to rendezvous in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using mutexes) to prevent common data from being read or overwritten in one thread while being modified by another. Careless use of such primitives can lead to deadlocks, livelocks or races over resources. As Edward A. Lee has written: "Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly non-deterministic, and the job of the programmer becomes one of pruning that nondeterminism."
- Being untestable: in general, multithreaded programs are non-deterministic, and as a result, are untestable. In other words, a multithreaded program can easily have bugs which never manifest on a test system, manifesting only in production. This can be alleviated by restricting inter-thread communications to certain well-defined patterns (such as message-passing).
- Synchronization costs: as a thread context switch on modern CPUs can cost up to 1 million CPU cycles, writing efficient multithreaded programs is difficult. In particular, special attention has to be paid to keep inter-thread synchronization from being too frequent.
== Programming language support == Many programming languages support threading in some capacity. IBM PL/I(F) included support for multithreading (called multitasking) as early as the late 1960s, and this was continued in the Optimizing Compiler and later versions. The IBM Enterprise PL/I compiler introduced a new model "thread" API. Neither version was part of the PL/I standard. Many implementations of C and C++ support threading, and provide access to the native threading APIs of the operating system.
A standardized interface for thread implementation is POSIX Threads (Pthreads), which is a set of C-function library calls. OS vendors are free to implement the interface as desired, but the application developer should be able to use the same interface across multiple platforms. Most Unix platforms, including Linux, support Pthreads. Microsoft Windows has its own set of thread functions in the process.h interface for multithreading, like beginthread. Some higher-level (and usually cross-platform) programming languages, such as Java, Python, and .NET Framework languages, expose threading to developers while abstracting the platform-specific differences in threading implementations in the runtime. Several other programming languages and language extensions also try to abstract the concept of concurrency and threading from the developer fully (Cilk, OpenMP, Message Passing Interface (MPI)). Some languages are designed for sequential parallelism instead (especially using GPUs), without requiring concurrency or threads (Ateji PX, CUDA). A few interpreted programming languages have implementations (e.g., Ruby MRI for Ruby, CPython for Python) which support threading and concurrency but not parallel execution of threads, due to a global interpreter lock (GIL). The GIL is a mutual exclusion lock held by the interpreter that prevents the interpreter from interpreting the application's code on two or more threads at once. This effectively limits the parallelism on multi-core systems. It also limits performance for processor-bound threads (which require the processor), but doesn't affect I/O-bound or network-bound ones as much. Other implementations of interpreted programming languages, such as Tcl using the Thread extension, avoid the GIL limit by using an Apartment model where data and code must be explicitly "shared" between threads. In Tcl each thread has one or more interpreters.
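The point about I/O-bound threads can be demonstrated in CPython itself: blocking calls release the GIL while they wait (here `time.sleep` stands in for real I/O, and the 0.2-second duration is an arbitrary choice):

```python
import threading
import time

def io_task():
    # time.sleep stands in for a blocking I/O call; like real I/O in
    # CPython, it releases the GIL while waiting.
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four 0.2 s waits overlap instead of taking 0.8 s back to back.
print(f"elapsed: {elapsed:.2f} s")
```

A CPU-bound loop in place of the sleep would show no such speed-up under the GIL, since only one thread can execute bytecode at a time.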
In programming models such as CUDA designed for data parallel computation, an array of threads runs the same code in parallel, each thread using only its ID to find its data in memory. In essence, the application must be designed so that each thread performs the same operation on different segments of memory so that they can operate in parallel and use the GPU architecture. Hardware description languages such as Verilog have a different threading model that supports extremely large numbers of threads (for modeling hardware). == See also == == References == == Further reading ==
https://en.wikipedia.org/wiki/Thread_(computing)
Zee Tamil is an Indian Tamil general entertainment private broadcast television network owned by Zee Entertainment Enterprises. This is a list of original programs that have been broadcast on Zee Tamil. == Current programming == === Drama series === === Reality shows === == Former programming == === Drama series === === Soap operas === Nenjathai Killadhe (2014) Nenjathai Killadhey === Dubbed series === Chinna Poove Mella Pesu Chinna Marumagal CID Crime Patrol Fear Files Iniya Iru Malargal Jodha Akbar Kaadhalukku Salaam Kaatrukkenna Veli Mahabharatham Mapillai Marumanam Naagarani Naanum Oru Penn Puratchiyalar Dr. Ambedkar Ramayanam Sivanum Naanum Thamarai Thenali Raaman Veera Marthandan Veera Shivaji Vishnu Puranam === Reality and non-scripted shows === Dance Jodi Dance Dance Jodi Dance season 2 Dance Jodi Dance Reloaded Dance Jodi Dance Juniors Dancing Khilladies Genes Genes season 2 Genes season 3 Junior Senior Junior Super Stars (seasons 1, 2 and 3) Mahanadigai Mr & Mrs Khiladis (seasons 1 and 2) Nanben Da Run Baby Run Sa Re Ga Ma Pa Challenge Tamil 2009 Sa Re Ga Ma Pa Seniors Sa Re Ga Ma Pa Seniors season 2 Sa Re Ga Ma Pa Seniors season 3 Sa Re Ga Ma Pa Seniors season 4 Sa Re Ga Ma Pa Lil Champs (seasons 1 and 2) Sa Re Ga Ma Pa Tamil Li'l Champs season 3 Simply Kushboo Solvathellam Unmai (seasons 1 and 2) Sundays with Anil and Karky Super Jodi Super Mom (seasons 1, 2 and 3) Survivor Survivor season 1 Weekend with Stars Why This Kolaveri Zee Super Family == References ==
https://en.wikipedia.org/wiki/List_of_programmes_broadcast_by_Zee_Tamil_(India)
The Advanced Boolean Expression Language (ABEL) is an obsolete hardware description language (HDL) and an associated set of design tools for programming programmable logic devices (PLDs). It was created in 1983 by Data I/O Corporation, in Redmond, Washington. ABEL includes both concurrent equation and truth table logic formats as well as a sequential state machine description format. A preprocessor with syntax loosely based on Digital Equipment Corporation's MACRO-11 assembly language is also included. In addition to being used for describing digital logic, ABEL may also be used to describe test vectors (patterns of inputs and expected outputs) that may be downloaded to a hardware PLD programmer along with the compiled and fuse-mapped PLD programming data. Other PLD design languages originating in the same era include CUPL and PALASM. Since the advent of larger field-programmable gate arrays (FPGAs), PLD-specific HDLs have fallen out of favor as standard HDLs such as Verilog and VHDL gained adoption. The ABEL concept and original compiler were created by Russell de Pina of Data I/O's Applied Research Group in 1981. The work was continued by the ABEL product development team (led by Dr. Kyu Y. Lee), which included Mary Bailey, Bjorn Benson, Walter Bright, Michael Holley, Charles Olivier, and David Pellerin. After a series of acquisitions, the ABEL toolchain and intellectual property were bought by Xilinx. Xilinx discontinued support for ABEL in its ISE Design Suite starting with version 11 (released in 2010). == References == == External links ==
- University of Pennsylvania's ABEL primer, as recommended by Walter Bright (dead link)
- University of Southern Maine ABEL-HDL Primer, by J. Van der Spiegel
- Digital Design Using ABEL, Prentice Hall, 1994, by David Pellerin and Michael Holley
- Practical Design Using Programmable Logic, Prentice Hall, 1991, by David Pellerin and Michael Holley
https://en.wikipedia.org/wiki/Advanced_Boolean_Expression_Language
Sigreturn-oriented programming (SROP) is a computer security exploit technique that allows an attacker to execute code in presence of security measures such as non-executable memory and code signing. It was presented for the first time at the 35th IEEE Symposium on Security and Privacy in 2014 where it won the best student paper award. This technique employs the same basic assumptions behind the return-oriented programming (ROP) technique: an attacker controlling the call stack, for example through a stack buffer overflow, is able to influence the control flow of the program through simple instruction sequences called gadgets. The attack works by pushing a forged sigcontext structure on the call stack, overwriting the original return address with the location of a gadget that allows the attacker to call the sigreturn system call. Often just a single gadget is needed to successfully put this attack into effect. This gadget may reside at a fixed location, making this attack simple and effective, with a setup generally simpler and more portable than the one needed by the plain return-oriented programming technique. Sigreturn-oriented programming can be considered a weird machine since it allows code execution outside the original specification of the program. == Background == Sigreturn-oriented programming (SROP) is a technique similar to return-oriented programming (ROP), since it employs code reuse to execute code outside the scope of the original control flow. In this sense, the adversary needs to be able to carry out a stack smashing attack, usually through a stack buffer overflow, to overwrite the return address contained inside the call stack. === Stack hopping exploits === If mechanisms such as data execution prevention are employed, it won't be possible for the attacker to just place a shellcode on the stack and cause the machine to execute it by overwriting the return address. 
With such protections in place, the machine won't execute any code present in memory areas marked as writable and non-executable. Therefore, the attacker will need to reuse code already present in memory. Most programs do not contain functions that will allow the attacker to directly carry out the desired action (e.g., obtain access to a shell), but the necessary instructions are often scattered around memory. Return-oriented programming requires these sequences of instructions, called gadgets, to end with a RET instruction. In this way, the attacker can write a sequence of addresses for these gadgets to the stack, and as soon as a RET instruction in one gadget is executed, the control flow will proceed to the next gadget in the list. === Signal handler mechanism === This attack is made possible by how signals are handled in most POSIX-like systems. Whenever a signal is delivered, the kernel needs to context switch to the installed signal handler. To do so, the kernel saves the current execution context in a frame on the stack. The structure pushed onto the stack is an architecture-specific variant of the sigcontext structure, which holds various data comprising the contents of the registers at the moment of the context switch. When the execution of the signal handler is completed, the sigreturn() system call is called. Calling the sigreturn syscall means being able to easily set the contents of registers using a single gadget that can be easily found on most systems. === Differences from ROP === There are several factors that characterize an SROP exploit and distinguish it from a classical return-oriented programming exploit. First, ROP is dependent on available gadgets, which can be very different in distinct binaries, thus making chains of gadget non-portable. Address space layout randomization (ASLR) makes it hard to use gadgets without an information leakage to get their exact positions in memory. 
Although Turing-complete ROP compilers exist, it is usually non-trivial to create a ROP chain. SROP exploits are usually portable across different binaries with minimal or no effort and allow easily setting the contents of the registers, which could be non-trivial or infeasible for ROP exploits if the needed gadgets are not present. Moreover, SROP requires a minimal number of gadgets and allows constructing effective shellcodes by chaining system calls. These gadgets are always present in memory, and in some cases are always at fixed locations. == Attacks == === Linux === An example of the kind of gadget needed for SROP exploits can always be found in the virtual dynamic shared object (VDSO) memory area on x86 Linux systems. On some Linux kernel versions, ASLR can be disabled by setting the limit for the stack size to unlimited, effectively bypassing ASLR and allowing easy access to the gadget present in the VDSO. For Linux kernels prior to version 3.3, it is also possible to find a suitable gadget inside the vsyscall page, which is a mechanism to accelerate the access to certain system calls often used by legacy programs and which always resides at a fixed location. === Turing-completeness === It is possible to use gadgets to write into the contents of the stack frames, thereby constructing a self-modifying program. Using this technique, it is possible to devise a simple virtual machine, which can be used as the compilation target for a Turing-complete language. An example of such an approach can be found in Bosman's paper, which demonstrates the construction of an interpreter for a language similar to the Brainfuck programming language. The language provides a program counter PC, a memory pointer P, and a temporary register used for 8-bit addition A. This means that complex backdoors or obfuscated attacks can also be devised.
== Defenses and mitigations == A number of techniques exist to mitigate SROP attacks, relying on address space layout randomization, canaries and cookies, or shadow stacks. === Address space layout randomization === Address space layout randomization makes it harder to use suitable gadgets by making their locations unpredictable. === Signal cookies === A mitigation for SROP called signal cookies has been proposed. It consists of a way of verifying that the sigcontext structure has not been tampered with, by means of a random cookie XORed with the address of the stack location where it is to be stored. In this way, the sigreturn syscall just needs to verify the cookie's existence at the expected location, effectively mitigating SROP with a minimal impact on performance. === Vsyscall emulation === In Linux kernel versions greater than 3.3, the vsyscall interface is emulated, and any attempt to directly execute gadgets in the page will result in an exception. === RAP === Grsecurity is a set of patches for the Linux kernel to harden and improve system security. It includes the so-called return-address protection (RAP) to help protect against code reuse attacks. === CET === Starting in 2016, Intel has been developing a Control-flow Enforcement Technology (CET) to help mitigate and prevent stack-hopping exploits. CET works by implementing a shadow stack in RAM which will only contain return addresses, protected by the CPU's memory management unit. == See also ==
- Linux kernel interfaces
- Vulnerability (computing)
- Exploit (computer security)
- Buffer overflow
- Address space layout randomization
- Executable space protection
- NX bit
== References == == External links ==
- OHM 2013: Review of “Returning signals for fun and profit”
- Playing around with SROP
- Fun with SROP Exploitation
- binjitsu - Sigreturn Oriented Programming
- SigReturn Oriented Programming on x86-64 linux
- Sigreturn ROP exploitation technique (signal's stack frame for the win)
https://en.wikipedia.org/wiki/Sigreturn-oriented_programming
A programmable logic controller (PLC) or programmable controller is an industrial computer that has been ruggedized and adapted for the control of manufacturing processes, such as assembly lines, machines, robotic devices, or any activity that requires high reliability, ease of programming, and process fault diagnosis. PLCs can range from small modular devices with tens of inputs and outputs (I/O), in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, and which are often networked to other PLC and SCADA systems. They can be designed for many arrangements of digital and analog I/O, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. PLCs were first developed in the automobile manufacturing industry to provide flexible, rugged and easily programmable controllers to replace hard-wired relay logic systems. Dick Morley, who invented the first PLC, the Modicon 084, for General Motors in 1968, is considered the father of the PLC. A PLC is an example of a hard real-time system, since output results must be produced in response to input conditions within a limited time, otherwise unintended operation may result. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory. == Invention and early development == The PLC originated in the late 1960s in the automotive industry in the US and was designed to replace relay logic systems. Before then, control logic for manufacturing was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. The hard-wired nature of these components made it difficult for design engineers to alter the automation process. Changes would require rewiring and careful updating of the documentation. Troubleshooting was a tedious process. When general-purpose computers became available, they were soon applied to control logic in industrial processes.
These early computers were unreliable and required specialist programmers and strict control of working conditions, such as temperature, cleanliness, and power quality. The PLC provided several advantages over earlier automation systems. It was designed to tolerate the industrial environment better than systems intended for office use, and was more reliable, compact, and required less maintenance than relay systems. It was easily expandable with additional I/O modules. While relay systems required tedious and sometimes complicated hardware changes in case of reconfiguration, a PLC can be reconfigured by loading new or modified code. This allowed for easier iteration over manufacturing process design. With a simple programming language focused on logic and switching operations, it was more user-friendly than computers using general-purpose programming languages. Early PLCs were programmed in ladder logic, which strongly resembled a schematic diagram of relay logic. It also permitted its operation to be monitored. === Modicon === In 1968, GM Hydramatic, the automatic transmission division of General Motors, issued a request for proposals for an electronic replacement for hard-wired relay systems based on a white paper written by engineer Edward R. Clark. The winning proposal came from Bedford Associates from Bedford, Massachusetts. The result, built in 1969, was the first PLC and designated the 084, because it was Bedford Associates' eighty-fourth project. Bedford Associates started a company dedicated to developing, manufacturing, selling, and servicing this new product, which they named Modicon (standing for modular digital controller). One of the people who worked on that project was Dick Morley, who is considered to be the father of the PLC. The Modicon brand was sold in 1977 to Gould Electronics and later to Schneider Electric, its current owner. About this same time, Modicon created Modbus, a data communications protocol used with its PLCs. 
Modbus has since become a standard open protocol commonly used to connect many industrial electrical devices. One of the first 084 models built is now on display at Schneider Electric's facility in North Andover, Massachusetts. It was presented to Modicon by GM, when the unit was retired after nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its product range until after the 984 made its appearance. === Allen-Bradley === In a parallel development, Odo Josef Struger is sometimes known as the "father of the programmable logic controller" as well. He was involved in the invention of the Allen-Bradley programmable logic controller and is credited with coining the PLC acronym. Allen-Bradley (now a brand owned by Rockwell Automation) became a major PLC manufacturer in the United States during his tenure. Struger played a leadership role in developing IEC 61131-3 PLC programming language standards. === Early methods of programming === Many early PLC programming applications were not capable of graphical representation of the logic, and so it was instead represented as a series of logic expressions in some kind of Boolean format, similar to Boolean algebra. As programming terminals evolved, because ladder logic was a familiar format used for electro-mechanical control panels, it became more commonly used. Newer formats, such as state logic, function block diagrams, and structured text exist. Ladder logic remains popular because PLCs solve the logic in a predictable and repeating sequence, and ladder logic allows the person writing the logic to see any issues with the timing of the logic sequence more easily than would be possible in other formats. Up to the mid-1990s, PLCs were programmed using proprietary programming panels or special-purpose programming terminals, which often had dedicated function keys representing the various logical elements of PLC programs. 
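The Boolean representation of ladder logic mentioned above can be made concrete with a textbook start/stop "seal-in" rung, sketched here in Python (the rung is a classic teaching example, not taken from any particular controller):

```python
# One rung of ladder logic for a motor start/stop station, written as
# the Boolean expression it is equivalent to:
#
#     motor := (start OR motor) AND NOT stop
#
# The PLC re-evaluates the rung on every scan; once energized, the
# motor's own contact "seals in" the momentary start button until the
# stop contact opens.
def scan(start, stop, motor):
    return (start or motor) and not stop

motor = False
motor = scan(start=True, stop=False, motor=motor)    # start pressed
motor = scan(start=False, stop=False, motor=motor)   # start released
print(motor)  # True: the seal-in contact holds the motor on
motor = scan(start=False, stop=True, motor=motor)    # stop pressed
print(motor)  # False: the rung drops out
```

The predictable, repeating scan sequence described above is exactly why this rung behaves deterministically: the output latches between scans and is recomputed from the inputs on every pass.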
Some proprietary programming terminals displayed the elements of PLC programs as graphic symbols, but plain ASCII character representations of contacts, coils, and wires were common. Programs were stored on cassette tape cartridges. Facilities for printing and documentation were minimal due to a lack of memory capacity. The oldest PLCs used magnetic-core memory. == Architecture == A PLC is an industrial microprocessor-based controller with programmable memory used to store program instructions and various functions. It consists of:
- a processor unit (CPU), which interprets inputs, executes the control program stored in memory and sends output signals,
- a power supply unit, which converts AC voltage to DC,
- a memory unit, storing data from inputs and the program to be executed by the processor,
- an input and output interface, where the controller receives and sends data from and to external devices,
- a communications interface, to receive and transmit data on communication networks from and to remote PLCs.
PLCs require a programming device, which is used to develop and later download the created program into the memory of the controller. Modern PLCs generally contain a real-time operating system, such as OS-9 or VxWorks. === Mechanical design === There are two types of mechanical design for PLC systems. A single box (also called a brick) is a small programmable controller that fits all units and interfaces into one compact casing, although, typically, additional expansion modules for inputs and outputs are available. The second design type – a modular PLC – has a chassis (also called a rack) that provides space for modules with different functions, such as power supply, processor, selection of I/O modules and communication interfaces – which all can be customized for the particular application. Several racks can be administered by a single processor and may have thousands of inputs and outputs.
Either a special high-speed serial I/O link or comparable communication method is used so that racks can be distributed away from the processor, reducing the wiring costs for large plants. === Discrete and analog signals === Discrete (digital) signals can take only an on or off value (1 or 0, true or false). Examples of devices providing a discrete signal include limit switches and photoelectric sensors. Analog signals can use voltage or current that is analogous to the monitored variable and can take any value within their scale. Pressure, temperature, flow, and weight are often represented by analog signals. These are typically interpreted as integer values with various ranges of accuracy depending on the device and the number of bits available to store the data. For example, a 0 to 10 V analog input or a 4-20 mA current loop input would be converted into an integer value of 0 to 32,767. The PLC will take this value and translate it into the desired units of the process so the operator or program can read it. === Redundancy === Some processes must run continuously with minimal unwanted downtime, and such systems must therefore be designed to be fault tolerant. In such cases, to increase the system availability in the event of hardware component failure, redundant CPU or I/O modules with the same functionality can be added to a hardware configuration to prevent a total or partial process shutdown due to hardware failure. Other redundancy scenarios could be related to safety-critical processes: for example, large hydraulic presses could require that two PLCs turn on an output before the press can come down, in case one PLC does not behave properly. == Programming == Programmable logic controllers are intended to be used by engineers without a programming background. For this reason, a graphical programming language called ladder logic was first developed.
It resembles the schematic diagram of a system built with electromechanical relays and was adopted by many manufacturers and later standardized in the IEC 61131-3 control systems programming standard. As of 2015, it is still widely used, thanks to its simplicity. As of 2015, the majority of PLC systems adhere to the IEC 61131-3 standard that defines 2 textual programming languages: Structured Text (similar to Pascal) and Instruction List; as well as 3 graphical languages: ladder logic, function block diagram and sequential function chart. Instruction List was deprecated in the third edition of the standard. Modern PLCs can be programmed in a variety of ways, from the relay-derived ladder logic to programming languages such as specially adapted dialects of BASIC and C. While the fundamental concepts of PLC programming are common to all manufacturers, differences in I/O addressing, memory organization, and instruction sets mean that PLC programs are never perfectly interchangeable between different makers. Even within the same product line of a single manufacturer, different models may not be directly compatible. === Programming device === Manufacturers develop programming software for their PLCs. In addition to being able to program PLCs in multiple languages, they provide common features like hardware diagnostics and maintenance, software debugging, and offline simulation. PLC programs are typically written in a programming device, which can take the form of a desktop console, special software on a personal computer, or a handheld device. The program is then downloaded to the PLC through a cable connection or over a network. It is stored either in non-volatile flash memory or battery-backed-up RAM on the PLC. In some PLCs, the program is transferred from the programming device using a programming board that writes the program into a removable chip, such as EPROM that is then inserted into the PLC. 
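The analog scaling described in the Discrete and analog signals section, where a raw input count such as 0 to 32,767 is translated into process units, amounts to a linear mapping. A minimal sketch in Python (the function name, default ranges, and the tank-level example are illustrative assumptions, not any vendor's API):

```python
def scale_analog(raw, raw_min=0, raw_max=32767, eng_min=0.0, eng_max=100.0):
    """Linearly map a raw input count to engineering units.

    For example, a 4-20 mA level transmitter read as a 0-32767 register
    might represent 0-100 percent tank level (hypothetical setup).
    """
    span = raw_max - raw_min
    return eng_min + (raw - raw_min) * (eng_max - eng_min) / span

# A raw count near mid-scale reads as roughly half of the engineering range.
print(round(scale_analog(16384), 1))  # → 50.0
```

Real PLCs perform the same translation either in a dedicated scaling instruction or in the input module itself; the formula is the same linear interpolation.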
=== Simulation === An incorrectly programmed PLC can result in lost productivity and dangerous conditions for programmed equipment. PLC simulation is a feature often found in PLC programming software. It allows for testing and debugging early in a project's development. Testing the project in simulation improves its quality, increases the level of safety associated with equipment and can save time during the installation and commissioning of automated control applications since many scenarios can be tried and tested before the system is activated. == Functionality == The main difference compared to most other computing devices is that PLCs are intended for and therefore tolerant of more severe environmental conditions (such as dust, moisture, heat, cold), while offering extensive input/output (I/O) to connect the PLC to sensors and actuators. PLC input can include simple digital elements such as limit switches, analog variables from process sensors (such as temperature and pressure), and more complex data such as that from positioning or machine vision systems. PLC output can include elements such as indicator lamps, sirens, electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a fieldbus or computer network that plugs into the PLC. The functionality of the PLC has evolved over the years to include sequential relay control, motion control, process control, distributed control systems, and networking. The data handling, storage, processing power, and communication capabilities of some modern PLCs are approximately equivalent to desktop computers. PLC-like programming combined with remote I/O hardware, allows a general-purpose desktop computer to overlap some PLCs in certain applications. 
Desktop computer controllers have not been generally accepted in heavy industry because desktop computers run on less stable operating systems than PLCs, and because the desktop computer hardware is typically not designed to the same levels of tolerance to temperature, humidity, vibration, and longevity as the processors used in PLCs. Operating systems such as Windows do not lend themselves to deterministic logic execution, with the result that the controller may not always respond to changes of input status with the consistency in timing expected from PLCs. Desktop logic applications find use in less critical situations, such as laboratory automation and use in small facilities where the application is less demanding and critical. === Basic functions === The most basic function of a programmable logic controller is to emulate the functions of electromechanical relays. Discrete inputs are given a unique address, and a PLC instruction can test if the input state is on or off. Just as a series of relay contacts perform a logical AND function, not allowing current to pass unless all the contacts are closed, so a series of "examine if on" instructions will energize its output storage bit if all the input bits are on. Similarly, a parallel set of instructions will perform a logical OR. In an electromechanical relay wiring diagram, a group of contacts controlling one coil is called a "rung" of a "ladder diagram", and this concept is also used to describe PLC logic. Some models of PLC limit the number of series and parallel instructions in one "rung" of logic. The output of each rung sets or clears a storage bit, which may be associated with a physical output address or which may be an "internal coil" with no physical connection. Such internal coils can be used, for example, as a common element in multiple separate rungs. Unlike physical relays, there is usually no limit to the number of times an input, output or internal coil can be referenced in a PLC program. 
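The series-equals-AND and parallel-equals-OR behaviour described above can be sketched as a tiny rung evaluation in Python. The input names (start_pb, stop_pb, guard_closed) and the seal-in rung are a hypothetical illustration, not any manufacturer's instruction set:

```python
# Input image as it might be latched at the start of a scan (hypothetical).
inputs = {"start_pb": True, "stop_pb": True, "guard_closed": True}

def scan(inputs, motor_running):
    # Parallel branch acts like logical OR: start button OR the motor's
    # own output bit (the classic "seal-in" contact).
    seal_in = inputs["start_pb"] or motor_running
    # Series contacts act like logical AND: all must be "on" to energize.
    return seal_in and inputs["stop_pb"] and inputs["guard_closed"]

state = False
state = scan(inputs, state)   # start pressed -> motor energized
inputs["start_pb"] = False
state = scan(inputs, state)   # start released -> stays on via seal-in
inputs["stop_pb"] = False     # stop_pb wired normally closed, now opened
state = scan(inputs, state)   # stop pressed -> motor de-energized
print(state)                  # → False
```

The seal-in pattern shows why internal coils may be referenced any number of times: the rung's own output bit is reused as an input contact on the next scan.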
Some PLCs enforce a strict left-to-right, top-to-bottom execution order for evaluating the rung logic. This is different from electro-mechanical relay contacts, which, in a sufficiently complex circuit, may either pass current left-to-right or right-to-left, depending on the configuration of surrounding contacts. The elimination of these "sneak paths" is either a bug or a feature, depending on the programming style. More advanced instructions of the PLC may be implemented as functional blocks, which carry out some operation when enabled by a logical input and which produce outputs to signal, for example, completion or errors, while manipulating variables internally that may not correspond to discrete logic. === Communication === PLCs use built-in ports, such as USB, Ethernet, RS-232, RS-485, or RS-422 to communicate with external devices (sensors, actuators) and systems (programming software, SCADA, user interface). Communication is carried over various industrial network protocols, like Modbus, or EtherNet/IP. Many of these protocols are vendor specific. PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to co-ordinate over the communication link. These communication links are also often used for user interface devices such as keypads or PC-type workstations. Formerly, some manufacturers offered dedicated communication modules as an add-on function where the processor had no network connection built-in. === User interface === PLCs may need to interact with people for the purpose of configuration, alarm reporting, or everyday control. A human-machine interface (HMI) is employed for this purpose. HMIs are also referred to as man-machine interfaces (MMIs) and graphical user interfaces (GUIs). A simple system may use buttons and lights to interact with the user. 
Text displays are available as well as graphical touch screens. More complex systems use programming and monitoring software installed on a computer, with the PLC connected via a communication interface. == Process of a scan cycle == A PLC works in a program scan cycle, where it executes its program repeatedly. The simplest scan cycle consists of 3 steps: Read inputs. Execute the program. Write outputs. The program follows the sequence of instructions. It typically takes a time span of tens of milliseconds for the processor to evaluate all the instructions and update the status of all outputs. If the system contains remote I/O—for example, an external rack with I/O modules—then that introduces additional uncertainty in the response time of the PLC system. As PLCs became more advanced, methods were developed to change the sequence of ladder execution, and subroutines were implemented. Special-purpose I/O modules may be used where the scan time of the PLC is too long to allow predictable performance. Precision timing modules, or counter modules for use with shaft encoders, are used where the scan time would be too long to reliably count pulses or detect the sense of rotation of an encoder. This allows even a relatively slow PLC to still interpret the counted values to control a machine, as the accumulation of pulses is done by a dedicated module that is unaffected by the speed of program execution. == Security == In his book from 1998, E. A. Parr pointed out that even though most programmable controllers require physical keys and passwords, the lack of strict access control and version control systems, as well as an easy-to-understand programming language make it likely that unauthorized changes to programs will happen and remain unnoticed. Prior to the discovery of the Stuxnet computer worm in June 2010, the security of PLCs received little attention. 
Modern programmable controllers generally contain real-time operating systems, which can be vulnerable to exploits in a similar way as desktop operating systems, like Microsoft Windows. PLCs can also be attacked by gaining control of a computer they communicate with. Since 2011, these concerns have grown – networking is becoming more commonplace in the PLC environment, connecting the previously separated plant floor networks and office networks. In February 2021, Rockwell Automation publicly disclosed a critical vulnerability affecting its Logix controllers family. The secret cryptographic key used to verify communication between the PLC and workstation could be extracted from the programming software (Studio 5000 Logix Designer) and used to remotely change program code and configuration of a connected controller. The vulnerability was given a severity score of 10 out of 10 on the CVSS vulnerability scale. At the time of writing, the mitigation of the vulnerability was to limit network access to affected devices. == Safety PLCs == Safety PLCs can be either a standalone device or a safety-rated hardware and functionality added to existing controller architectures (Allen-Bradley GuardLogix, Siemens F-series, etc.). These differ from conventional PLC types by being suitable for safety-critical applications for which PLCs have traditionally been supplemented with hard-wired safety relays and areas of the memory dedicated to the safety instructions. The standard of safety level is the SIL. A safety PLC might be used to control access to a robot cell with trapped-key access, or to manage the shutdown response to an emergency stop button on a conveyor production line. Such PLCs typically have a restricted regular instruction set augmented with safety-specific instructions designed to interface with emergency stop buttons, light screens, and other safety-related devices. The flexibility that such systems offer has resulted in rapid growth of demand for these controllers. 
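The read-execute-write loop described in the Process of a scan cycle section can be sketched minimally in Python. The sensor and lamp names and the three helper functions are stand-ins for the real input latching and output driving that a PLC performs in hardware:

```python
outputs = {}

def read_inputs():
    # Stand-in for latching the physical input image (hypothetical sensor).
    return {"sensor": True}

def execute(image):
    # The "user program": here it simply copies a sensor state to a lamp.
    return {"lamp": image["sensor"]}

def write_outputs(result):
    # Stand-in for driving the physical output terminals.
    outputs.update(result)

# The simplest scan cycle: read inputs, execute the program, write outputs.
for _ in range(3):            # a real PLC repeats this loop indefinitely
    image = read_inputs()
    write_outputs(execute(image))

print(outputs)                # → {'lamp': True}
```

Because outputs are only written at the end of each pass, every rung in a scan sees a consistent snapshot of the inputs, which is what makes the execution order predictable.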
== PLC compared with other control systems == PLCs are well adapted to a range of automation tasks. These are typically industrial processes in manufacturing where the cost of developing and maintaining the automation system is high relative to the total cost of the automation, and where changes to the system would be expected during its operational life. PLCs contain input and output devices compatible with industrial pilot devices and controls; little electrical design is required, and the design problem centers on expressing the desired sequence of operations. PLC applications are typically highly customized systems, so the cost of a packaged PLC is low compared to the cost of a specific custom-built controller design. On the other hand, in the case of mass-produced goods, customized control systems are economical. This is due to the lower cost of the components, which can be optimally chosen instead of a "generic" solution, and where the non-recurring engineering charges are spread over thousands or millions of units. Programmable controllers are widely used in motion, positioning, or torque control. Some manufacturers produce motion control units to be integrated with PLC so that G-code (involving a CNC machine) can be used to instruct machine movements. === PLC chip / embedded controller === These are for small machines and systems with low or medium volume. They can execute PLC languages such as Ladder, Flow-Chart/Grafcet, etc. They are similar to traditional PLCs, but their small size allows developers to design them into custom printed circuit boards like a microcontroller, without computer programming knowledge, but with a language that is easy to use, modify and maintain. They sit between the classic PLC / micro-PLC and microcontrollers. 
=== Microcontrollers === A microcontroller-based design would be appropriate where hundreds or thousands of units will be produced and so the development cost (design of power supplies, input/output hardware, and necessary testing and certification) can be spread over many sales, and where the end-user would not need to alter the control. Automotive applications are an example; millions of units are built each year, and very few end-users alter the programming of these controllers. However, some specialty vehicles such as transit buses economically use PLCs instead of custom-designed controls, because the volumes are low and the development cost would be uneconomical. === Single-board computers === Very complex process control, such as those used in the chemical industry, may require algorithms and performance beyond the capability of even high-performance PLCs. Very high-speed or precision controls may also require customized solutions; for example, aircraft flight controls. Single-board computers using semi-customized or fully proprietary hardware may be chosen for very demanding control applications where the high development and maintenance cost can be supported. "Soft PLCs" running on desktop-type computers can interface with industrial I/O hardware while executing programs within a version of commercial operating systems adapted for process control needs. The rising popularity of single board computers has also had an influence on the development of PLCs. Traditional PLCs are generally closed platforms, but some newer PLCs (e.g. groov EPIC from Opto 22, ctrlX from Bosch Rexroth, PFC200 from Wago, PLCnext from Phoenix Contact, and Revolution Pi from Kunbus) provide the features of traditional PLCs on an open platform. === Programmable logic relays (PLR) === In more recent years, small products called programmable logic relays (PLRs) or smart relays, have become more common and accepted. 
These are similar to PLCs and are used in light industries where only a few points of I/O are needed, and low cost is desired. These small devices are typically made in a common physical size and shape by several manufacturers and branded by the makers of larger PLCs to fill their low-end product range. Most of these have 8 to 12 discrete inputs, 4 to 8 discrete outputs, and up to 2 analog inputs. Most such devices include a tiny postage stamp-sized LCD screen for viewing simplified ladder logic (only a very small portion of the program being visible at a given time) and status of I/O points, and typically these screens are accompanied by a 4-way rocker push-button plus four more separate push-buttons, similar to the key buttons on a VCR remote control, and used to navigate and edit the logic. Most have an RS-232 or RS-485 port for connecting to a PC so that programmers can use user-friendly software for programming instead of the small LCD and push-button set for this purpose. Unlike regular PLCs that are usually modular and greatly expandable, the PLRs are usually not modular or expandable, but their cost can be significantly lower than that of a PLC, and they still offer robust design and deterministic execution of the logic. A variant of PLCs, used in remote locations, is the remote terminal unit (RTU). An RTU is typically a low-power, ruggedized PLC whose key function is to manage the communications links between the site and the central control system (typically SCADA) or, in some modern systems, "The Cloud". Unlike factory automation using wired communication protocols such as Ethernet, communications links to remote sites are often radio-based and less reliable. To account for the reduced reliability, the RTU will buffer messages or switch to alternate communications paths. When buffering messages, the RTU will timestamp each message so that a full history of site events can be reconstructed.
RTUs, being PLCs, have a wide range of I/O and are fully programmable, typically with languages from the IEC 61131-3 standard that is common to many PLCs, RTUs and DCSs. In remote locations, it is common to use an RTU as a gateway for a PLC, where the PLC is performing all site control and the RTU is managing communications, time-stamping events and monitoring ancillary equipment. On sites with only a handful of I/O, the RTU may also be the site PLC and will perform both communications and control functions. == See also == 1-bit computing Industrial control system PLC technician == References == === Bibliography === == Further reading == Daniel Kandray, Programmable Automation Technologies, Industrial Press, 2010 ISBN 978-0-8311-3346-7, Chapter 8 Introduction to Programmable Logic Controllers Walker, Mark John (2012-09-08). The Programmable Logic Controller: its prehistory, emergence and application (PDF) (PhD thesis). Department of Communication and Systems Faculty of Mathematics, Computing and Technology: The Open University. Archived (PDF) from the original on 2018-06-20. Retrieved 2018-06-20.
https://en.wikipedia.org/wiki/Programmable_logic_controller
A+ is a high-level, interactive, interpreted array programming language designed for numerically intensive applications, especially those found in finance. == History == In 1985, Arthur Whitney created the A programming language to replace APL. Other developers at Morgan Stanley extended it to A+, adding a graphical user interface (GUI) and other language features. The GUI A+ was released in 1988. Arthur Whitney went on to create a proprietary array language named K. Like J, K omits the APL character set. It lacks some of the perceived complexities of A+, such as the existence of statements and two different modes of syntax. == Features == A+ provides an extended set of functions and operators, a graphical user interface with automatic synchronizing of widgets and variables, asynchronous execution of functions associated with variables and events, dynamic loading of user-compiled subroutines, and other features. A+ runs on many Unix variants, including Linux. It is free and open source software released under a GNU General Public License. A newer GUI has not yet been ported to all supported platforms. The A+ language implements the following changes to the APL language: an A+ function may have up to nine formal parameters; A+ code statements are separated by semicolons, so a single statement may be divided into two or more physical lines; the explicit result of a function or operator is the result of the last statement executed; and A+ implements an object called a dependency, which is a global variable (the dependent variable) and an associated definition that is like a function with no arguments. Values can be explicitly set and referenced in exactly the same ways as for a global variable, but they can also be set through the associated definition. Interactive A+ development is primarily done in the XEmacs editor, through extensions to the editor.
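The dependency mechanism described above resembles a value that can either be set directly or recomputed on demand from its definition. A rough Python analogy (this is not A+ syntax; the class, method names, and price example are purely illustrative):

```python
class Dependency:
    """Loose analogy for an A+ dependency: a global-like value with an
    associated zero-argument definition (illustrative sketch only)."""

    def __init__(self, definition):
        self._definition = definition  # the no-argument recompute rule
        self._value = None
        self._valid = False

    def get(self):
        if not self._valid:            # recompute via the definition
            self._value = self._definition()
            self._valid = True
        return self._value

    def set(self, value):              # explicit set, like a global variable
        self._value = value
        self._valid = True

    def invalidate(self):              # force recomputation on next get
        self._valid = False

prices = [3, 4, 5]
total = Dependency(lambda: sum(prices))
print(total.get())   # → 12, computed from the definition
total.set(99)        # explicitly set, exactly like a global
print(total.get())   # → 99
```

In A+ itself the invalidation is managed by the interpreter when the inputs of the definition change; the sketch above only shows the set-or-recompute duality.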
Because A+ code uses the original APL symbols, displaying A+ requires a font with those special characters; a font named kapl is provided on the web site for that purpose. == References == == External links == Official website, aplusdev.org
https://en.wikipedia.org/wiki/A%2B_(programming_language)
Logo is an educational programming language, designed in 1967 by Wally Feurzeig, Seymour Papert, and Cynthia Solomon. The name was coined by Feurzeig while he was at Bolt, Beranek and Newman, and derives from the Greek logos, meaning 'word' or 'thought'. A general-purpose language, Logo is widely known for its use of turtle graphics, in which commands for movement and drawing produced line or vector graphics, either on screen or with a small robot termed a turtle. The language was conceived to teach concepts of programming related to Lisp and only later to enable what Papert called "body-syntonic reasoning", where students could understand, predict, and reason about the turtle's motion by imagining what they would do if they were the turtle. There are substantial differences among the many dialects of Logo, and the situation is confused by the regular appearance of turtle graphics programs that are named Logo. Logo is a multi-paradigm adaptation and dialect of Lisp, a functional programming language. There is no standard Logo, but UCBLogo has the facilities for handling lists, files, I/O, and recursion in scripts, and can be used to teach all computer science concepts, as UC Berkeley lecturer Brian Harvey did in his Computer Science Logo Style trilogy. Logo is usually an interpreted language, although compiled Logo dialects (such as Lhogho and Liogo) have been developed. Logo is not case-sensitive but retains the case used for formatting purposes. == History == Logo was created in 1967 at Bolt, Beranek and Newman (BBN), a Cambridge, Massachusetts, research firm, by Wally Feurzeig, Cynthia Solomon, and Seymour Papert. Its intellectual roots are in artificial intelligence, mathematical logic and developmental psychology. For the first four years of Logo research, development and teaching work was done at BBN. The first implementation of Logo, called Ghost, was written in LISP on a PDP-1. 
The goal was to create a mathematical land where children could play with words and sentences. Logo was modeled on LISP, with design goals that included accessible power and informative error messages. The use of virtual Turtles allowed for immediate visual feedback and debugging of graphic programming. The first working Logo turtle robot was created in 1969. A display turtle preceded the physical floor turtle. Modern Logo has not changed very much from the basic concepts predating the first turtle. The first turtle was a tethered floor roamer, not radio-controlled or wireless. At BBN, Paul Wexelblat developed a turtle named Irving that had touch sensors and could move forwards, backwards, rotate, and ding its bell. The earliest year-long school users of Logo were in 1968–69 at Muzzey Jr. High in Lexington, Massachusetts. The virtual and physical turtles were first used by fifth-graders at the Bridge School in the same city in 1970–71. == Turtle and graphics == Logo's best-known feature is the turtle (derived originally from a robot of the same name), an on-screen "cursor" that shows output from commands for movement and a small retractable pen, together producing line graphics. It has traditionally been displayed either as a triangle or a turtle icon (though it can be represented by any icon). Turtle graphics were added to the Logo language by Seymour Papert in the late 1960s to support Papert's version of the turtle robot, a simple robot controlled from the user's workstation that is designed to carry out the drawing functions assigned to it using a small retractable pen set into or attached to the robot's body. As a practical matter, the use of turtle geometry instead of a more traditional model mimics the actual movement logic of the turtle robot. The turtle moves with commands that are relative to its own position; for example, LEFT 90 means turn left by 90 degrees.
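This relative, body-syntonic movement can be illustrated with a minimal turtle model in Python. (Python's standard turtle module follows the same idea with on-screen drawing; the small class below is a self-contained sketch that only tracks position and heading.)

```python
import math

class Turtle:
    """Minimal turtle state: position plus a heading in degrees."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 90.0            # start facing "up", Logo-style

    def left(self, angle):             # LEFT in Logo: rotate in place
        self.heading = (self.heading + angle) % 360

    def right(self, angle):            # RIGHT in Logo
        self.heading = (self.heading - angle) % 360

    def forward(self, dist):           # FORWARD: move along the heading
        self.x += dist * math.cos(math.radians(self.heading))
        self.y += dist * math.sin(math.radians(self.heading))

# REPEAT 4 [FORWARD 100 RIGHT 90] -- the classic Logo square.
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)
print(round(t.x), round(t.y))   # back at the origin: 0 0
```

Note that the program never mentions coordinates: every command is relative to where the turtle is and which way it faces, which is exactly the reasoning style Papert emphasized.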
Some Logo implementations, particularly those that allow the use of concurrency and multiple turtles, support collision detection and allow the user to redefine the appearance of the turtle cursor, essentially allowing the Logo turtles to function as sprites. Turtle geometry is also sometimes used in environments other than Logo as an alternative to a strictly coordinate-addressed graphics system. For instance, the idea of turtle graphics is also useful in Lindenmayer system for generating fractals. == Implementations == Some modern derivatives of Logo allow thousands of independently moving turtles. There are two popular implementations: Massachusetts Institute of Technology's StarLogo and Northwestern University Center for Connected Learning's (CCL) NetLogo. They allow exploring emergent phenomena and come with many experiments in social studies, biology, physics, and other areas. NetLogo is widely used in agent-based simulation in the biological and social sciences. Although there is no agreed-upon standard, there is a broad consensus on core aspects of the language. In March 2020, there were counted 308 implementations and dialects of Logo, each with its own strengths. Most of those 308 are no longer in wide use, but many are still under development. Commercial implementations widely used in schools include MicroWorlds Logo and Imagine Logo. Legacy and current implementations include: First released in 1980s Apple Logo for the Apple II Plus and Apple Logo Writer for the Apple IIe, developed by Logo Computer Systems, Inc. (LCSI), were the most broadly used and prevalent early implementations of Logo that peaked in the early to mid-1980s. Aquarius LOGO was released in 1982 on cartridge by Mattel for the Aquarius home computer. Atari Logo, developed by LCSI, was released on cartridge by Atari, Inc. in 1983 for the Atari 8-bit computers. Color Logo was released in 1983 on cartridge (26–2722) and disk (26–2721) by Tandy for the TRS-80 Color Computer. 
Commodore Logo was released, with the subtitle "A Language for Learning", by Commodore International. It was based on MIT Logo and enhanced by Terrapin, Inc. The Commodore 64 version (C64105) was released on diskette in 1983; the Plus/4 version (T263001) was released on cartridge in 1984. SmartLOGO was released on cassette by Coleco for the ADAM home computer in 1984. It was developed by LCSI and included a primer, Turtle Talk, by Seymour Papert. ExperLogo was released in 1985 on diskette by Expertelligence Inc. for the Macintosh 128K. Hot-Logo was released in the mid-1980s by EPCOM for the MSX 8-bit computers with its own set of commands in Brazilian Portuguese. TI Logo (for the TI-99/4A computer) was used in primary schools, emphasizing Logo's usefulness in teaching computing fundamentals to novice programmers. Sprite Logo, also developed by Logo Computer Systems Inc., had ten turtles that could run as independent processes. It ran on Apple II computers, with the aid of a Sprite Card inserted in one of the computer's slots. IBM marketed their own version of Logo (P/N 6024076), developed jointly by Logo Computer Systems, Inc. (LCSI), for their then-new IBM PC. ObjectLOGO is a variant of Logo with object-oriented programming extensions and lexical scoping. Version 2.7 was sold by Digitool, Inc. It is no longer being developed or supported, and does not run on versions of the Mac operating system later than 7.5. Dr. Logo was developed by Digital Research and distributed with computers including the IBM PCjr, Atari ST and the Amstrad CPC. Acornsoft Logo was released in 1985. It is a commercial implementation of Logo for the 8-bit BBC Micro and Acorn Electron computers. It was developed for Acorn Computers as a full implementation of Logo. It features multiple screen turtles and four-channel sound. It was provided on two 16kB ROMs, with utilities and drivers as accompanying software. 
Lego Logo is a version of Logo that can manipulate robotic Lego bricks attached to a computer. It was implemented on the Apple II and used in American and other grade schools in the late 1980s and early 1990s. Lego Logo is a precursor to Scratch. First released in 1990s In February 1990, Electron User published Timothy Grantham's simple implementation of Logo for the Acorn Electron under the article "Talking Turtle". Comenius Logo is an implementation of Logo developed by Comenius University Faculty of Mathematics and Physics. It started development in December 1991, and is also known in other countries as SuperLogo, MultiLogo and MegaLogo. UCBLogo, also known as Berkeley Logo, is a free, cross-platform implementation of standard Logo last released in 2009. George Mills at MIT used UCBLogo as the basis for MSWLogo which is more refined and also free. Jim Muller wrote a book, The Great Logo Adventure, which was a complete Logo manual and which used MSWLogo as the demonstration language. MSWLogo has evolved into FMSLogo. First released from 2000 onwards aUCBLogo is a rewrite and enhancement of UCBLogo. Imagine Logo is a successor of Comenius Logo, implemented in 2000. The English version was released by Logotron Ltd. in 2001. LibreLogo is an extension to some versions of LibreOffice. Released in 2012, it is written in Python. It allows vector graphics to be written in Writer. Logo3D is a tridimensional version of Logo. POOL is a dialect of Logo with object-oriented extensions, implemented in 2014. POOL programs are compiled and run in the graphical IDE on Microsoft Windows. A simplified, cross-platform environment is available for systems supporting .NET Framework. QLogo is an open-source and cross-platform rewrite of UCBLogo with nearly full UCB compatibility that uses hardware-accelerated graphics. Lynx is an online version of Logo developed by Logo Computer Systems Inc. 
It can run a large number of turtles and supports animation, parallel processes, colour and collision detection. LogoMor is an open-source online 3D Logo interpreter based on JavaScript and p5.js. It supports 3D drawings, animations, multimedia, 3D models and various tools. It also includes a fully-featured code editor based on CodeMirror. LbyM is an open-source online Logo interpreter based on JavaScript, created and actively developed (as of 2021) for Sonoma State University's Learning by Making program. It features traditional Logo programming, connectivity with a customized microcontroller and integration with a modern code editor. == Influence == Logo was a primary influence on the Smalltalk programming language. It is also the main influence on the Etoys educational programming environment and language, which is essentially a Logo variant written in Squeak (itself a variant of Smalltalk). Logo influenced the procedure/method model in AgentSheets and AgentCubes for programming agents, similar to the notion of a turtle in Logo. Logo provided the underlying language for Boxer, which was developed at the University of California, Berkeley and MIT and is based on a literacy model, making it easier to use for nontechnical people. KTurtle is a variation of Logo implemented in Qt for the KDE environment. Two more results of Logo's influence are Kojo, a variant of Scala, and Scratch, a visual, drag-and-drop language that runs in a web browser. == References == == Further reading == == External links == Media related to Logo (programming language) at Wikimedia Commons Logo Programming at Wikibooks
https://en.wikipedia.org/wiki/Logo_(programming_language)
A strict programming language is a programming language that only allows strict functions (functions whose parameters must be evaluated completely before they may be called) to be defined by the user. A non-strict programming language allows the user to define non-strict functions, and hence may allow lazy evaluation. In most non-strict languages, the non-strictness extends to data constructors. == Description == A strict programming language employs a strict programming paradigm, allowing only strict functions (functions whose parameters must be evaluated completely before they may be called) to be defined by the user. A non-strict programming language allows the user to define non-strict functions, and hence may allow lazy evaluation. Non-strictness has several disadvantages which have prevented widespread adoption: Because of the uncertainty over whether and when expressions will be evaluated, non-strict languages generally must be purely functional to be useful. All hardware architectures in common use are optimized for strict languages, so the best compilers for non-strict languages produce slower code than the best compilers for strict languages. Space complexity of non-strict programs is difficult to understand and predict. In many strict languages, some advantages of non-strict functions can be obtained through the use of macros or thunks. Strict programming languages are often associated with eager evaluation, and non-strict languages with lazy evaluation, but other evaluation strategies are possible in each case. The terms "eager programming language" and "lazy programming language" are often used as synonyms for "strict programming language" and "non-strict programming language" respectively. == Examples == Nearly all programming languages in common use today are strict. Examples include C#, Java, Perl (all versions, i.e. through version 5 and version 7), Python, Ruby, Common Lisp, and ML.
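As a concrete illustration, the following minimal Python sketch (Python itself being a strict language) shows two ways a strict language can recover advantages of non-strictness: a thunk that delays evaluation of an argument until it is actually needed, and a generator that mimics a non-strict data constructor by producing a conceptually infinite sequence of primes on demand. The helper names (`maybe`, `expensive`, `primes`) are illustrative, not from any library.

```python
from itertools import islice

def expensive():
    """Stands in for a costly computation; under strict evaluation a plain
    argument would always be evaluated before the call."""
    return sum(range(1_000_000))

def maybe(cond, thunk):
    # The zero-argument lambda ("thunk") is only forced when cond is true,
    # so the computation it wraps can be skipped entirely.
    return thunk() if cond else None

print(maybe(False, lambda: expensive()))  # expensive() is never evaluated

def primes():
    """Generator mimicking a non-strict data constructor: the 'list of all
    prime numbers' is produced lazily, one element at a time."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

# Only as much of the infinite structure is computed as is consumed.
print(list(islice(primes(), 6)))  # [2, 3, 5, 7, 11, 13]
```

Wrapping the argument in a lambda is exactly the thunk workaround mentioned above; Haskell performs the equivalent delaying automatically for every expression.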
Some strict programming languages include features that mimic laziness. Raku (formerly known as Perl 6) has lazy lists, Python has generator functions, and Julia provides a macro system for building non-strict functions, as does Scheme. Examples of non-strict languages are Haskell, R, Miranda, and Clean. == Extension == In most non-strict languages, the non-strictness extends to data constructors. This allows conceptually infinite data structures (such as the list of all prime numbers) to be manipulated in the same way as ordinary finite data structures. It also allows the use of very large but finite data structures, such as the complete game tree of chess. == Citations == == References ==
https://en.wikipedia.org/wiki/Strict_programming_language
CBS Broadcasting Inc., commonly shortened to CBS (an abbreviation of its original name, Columbia Broadcasting System), is an American commercial broadcast television and radio network that serves as the flagship property of the CBS Entertainment Group division of Paramount Global and is one of the company's three flagship subsidiaries, along with namesake Paramount Pictures and MTV. Founded in 1927, headquartered at the CBS Building in New York City and part of the "Big Three" television networks, CBS has major production facilities and operations at the CBS Broadcast Center and the headquarters of owner Paramount at One Astor Plaza (both also in that city) and at Television City and the CBS Studio Center in Los Angeles. It is sometimes referred to as the Eye Network, after the company's trademark symbol of an eye (which has been in use since October 20, 1951), and also the Tiffany Network, which alludes to the perceived high quality of its programming during the tenure of William S. Paley (and can also refer to some of CBS's first demonstrations of color television, which were held in the former Tiffany and Company Building in New York City in 1950). == History == The network has its origins in United Independent Broadcasters, Inc., a radio network founded in Chicago by New York City talent agent Arthur Judson in January 1927. In April of that year, the Columbia Phonograph Company, parent of the Columbia Records label, invested in the network, resulting in its rebranding as the Columbia Phonographic Broadcasting System (CPBS). In early 1928, Judson and Columbia sold the network to Isaac and Leon Levy, two brothers who owned WCAU, the network's Philadelphia affiliate, as well as their partner Jerome Louchheim. They installed William S. Paley, an in-law of the Levys, as president of the network. With the Columbia record label out of ownership, Paley rebranded the network as the Columbia Broadcasting System.
By September 1928, Paley became the network's majority owner with 51 percent of the business. Paramount Pictures then acquired the other 49 percent of CBS in 1929, but the Great Depression eventually forced the studio to sell its shares back to the network in 1932. CBS would then remain primarily an independent company throughout the next 63 years. Under Paley's guidance, CBS would first become one of the largest radio networks in the United States and eventually one of the Big Three American broadcast television networks. CBS expanded into television starting in the 1940s, and spun off its broadcast syndication division Viacom as a separate company in 1971. In 1974, CBS dropped its original full name and became known simply as CBS, Inc. The company was listed on the New York Stock Exchange under the ticker symbol "CBS". The Westinghouse Electric Corporation acquired the network in 1994, changing its legal name to the current CBS Broadcasting Inc. two years later, and in 1997 adopted the name of the company it had acquired to become CBS Corporation. In 1999, CBS came under the control of the original incarnation of Viacom, which was formed as a spin-off of CBS in 1971. In 2005, Viacom split itself into two separate companies and re-established CBS Corporation through the spin-off of its broadcast television, radio and select cable television and non-broadcasting assets, with the CBS network at its core. CBS Corporation was controlled by Sumner Redstone through National Amusements, which also controlled the second incarnation of Viacom until December 4, 2019, when the two companies agreed to re-merge to become ViacomCBS (now known as Paramount Global). Following the merger, CBS and its other broadcasting and entertainment assets were reorganized into a new division, CBS Entertainment Group. CBS operated the CBS Radio network until 2017, when it sold its radio division to Entercom (known as Audacy, Inc. since 2021).
Before this, CBS Radio mainly provided news and feature content for its portfolio of owned-and-operated radio stations in large and mid-sized markets, as well as its affiliated radio stations in various other markets. While holders of CBS Corporation common shares (as opposed to the multiple-voting shares held by National Amusements) were given a 72% stake in the combined Entercom, CBS no longer owns or operates any radio stations directly; however, it still provides radio news broadcasts to its radio affiliates and the new owners of its former radio stations, and licenses the rights to use CBS trademarks under a long-term contract. The television network has over 240 owned-and-operated and affiliated television stations throughout the United States, some also available in Canada via pay-television providers or in border areas over-the-air. == Programming == As of 2013, CBS provides 87+1⁄2 hours of regularly scheduled network programming each week. The network provides 22 hours of primetime programming to affiliated stations Monday through Saturday from 8:00 p.m. to 11:00 p.m. and Sunday from 7:00 p.m. to 11:00 p.m. Eastern and Pacific time (7:00 p.m. to 10:00 p.m. Monday through Saturday and 6:00 p.m. to 10:00 p.m. on Sunday in Central/Mountain time). The network also provides daytime programming from 11:00 a.m. to 4:00 p.m. Eastern and Pacific weekdays (subtract 1 hour for all other time zones), including a half-hour break for local news; the daytime lineup features the game shows The Price Is Right and Let's Make a Deal and the soap operas The Young and the Restless, The Bold and the Beautiful, and Beyond the Gates. CBS News programming includes CBS Mornings from 7:00 a.m. to 9:00 a.m. weekdays and CBS Saturday Morning in the same period on Saturdays; nightly editions of CBS Evening News; the Sunday political talk show Face the Nation; the early morning news program CBS Morning News; and the newsmagazines 60 Minutes, CBS News Sunday Morning, and 48 Hours.
On weeknights, CBS airs the talk show The Late Show with Stephen Colbert and the comedic game show After Midnight. CBS Sports programming is also provided most weekend afternoons. Due to the unpredictable length of sporting events, CBS occasionally delays scheduled primetime programs to allow the programs to air in their entirety, a practice most commonly seen with the NFL on CBS. In addition to rights to sports events from major sports organizations such as the NFL, PGA, and NCAA, CBS broadcasts the CBS Sports Spectacular, a sports anthology series that fills certain weekend afternoon time slots before (or in some cases, in place of) a major sporting event. === Daytime === CBS' daytime schedule is the longest among the major networks at 4+1⁄2 hours. It is the home of the long-running game show The Price Is Right, which began production in 1972 and is the longest continuously running daytime game show on network television. After being hosted by Bob Barker for 35 years, the show has been hosted since 2007 by actor and comedian Drew Carey. The network is also home to the current incarnation of Let's Make a Deal, hosted by singer and comedian Wayne Brady. CBS is the only commercial broadcast network that continues to broadcast daytime game shows. Notable game shows that once aired as part of the network's daytime lineup include Match Game, Tattletales, The $10/25,000 Pyramid, Press Your Luck, Card Sharks, Family Feud, and Wheel of Fortune. Past game shows that have had both daytime and prime time runs on the network include Beat the Clock and To Tell the Truth. Two long-running primetime-only games were the panel shows What's My Line? and I've Got a Secret. The network was also home to The Talk, a panel talk show similar in format to ABC's The View. It debuted in October 2010. The panel featured Sheryl Underwood, Amanda Kloots, Jerry O'Connell, Akbar Gbajabiamila, and Natalie Morales, who served as moderator. The Talk officially ended its run on December 20, 2024.
CBS Daytime airs three daytime soap operas each weekday: the hour-long series The Young and the Restless, which debuted in 1973; the half-hour series The Bold and the Beautiful, which debuted in 1987; and the hour-long series Beyond the Gates, which debuted in 2025. CBS has long aired the most soap operas out of the Big Three networks, carrying 3+1⁄2 hours of soaps on its daytime lineup from 1977 to 2009, and still retains the longest daily schedule. Besides Guiding Light, notable daytime soap operas that once aired on CBS include As the World Turns, Love of Life, Search for Tomorrow, The Secret Storm, The Edge of Night, and Capitol.
By the time of the deal, Nickelodeon and CBS were corporate sisters through the latter's then parent company Viacom as a result of its 2000 merger with CBS Corporation. From 2002 to 2005, live-action and animated Nickelodeon series aimed at older children also aired as part of the block under the name Nick on CBS. Following the Viacom-CBS split, the network decided to discontinue the Nickelodeon content deal. In March 2006, CBS entered into a three-year agreement with DIC Entertainment, which was acquired later that year by the Cookie Jar Group, to program the Saturday morning time slot as part of a deal that included distribution of select tape-delayed Formula One auto races. The KOL Secret Slumber Party on CBS replaced Nick Jr. on CBS that September, with the inaugural lineup featuring two new first-run live-action programs, one animated series that originally aired in syndication in 2005, and three shows produced before 2006. In mid-2007, KOL, the children's service of AOL, withdrew sponsorship from CBS' Saturday morning block, which was subsequently renamed KEWLopolis. Complementing CBS's 2007 lineup were Care Bears, Strawberry Shortcake, and Sushi Pack. On February 24, 2009, it was announced that CBS would renew its contract with Cookie Jar for another three seasons through 2012. On September 19, 2009, KEWLopolis was renamed Cookie Jar TV. On July 24, 2013, CBS agreed with Litton Entertainment, which already programmed a syndicated Saturday morning block exclusive to ABC stations and later produced a block for CBS' sister network The CW that received its debut the following year, to launch a new Saturday morning block featuring live-action reality-based lifestyle, wildlife, and sports series. The Litton-produced CBS Dream Team block, aimed at teenagers 13 to 16 years old, began broadcasting on September 28, 2013, replacing Cookie Jar TV. The block was renamed CBS WKND in 2023. 
=== Specials === ==== Animated primetime holiday specials ==== CBS was the original broadcast network home of the animated primetime holiday specials based on the Peanuts comic strip, beginning with A Charlie Brown Christmas in 1965. Over 30 holiday Peanuts specials (each for a specific holiday such as Halloween) were broadcast on CBS until 2000, when the broadcast rights were acquired by ABC. CBS also aired several primetime animated specials based on the works of Dr. Seuss (Theodor Geisel), beginning with How the Grinch Stole Christmas in 1966, as well as several specials based on the Garfield comic strip during the 1980s (which led to Garfield getting his own Saturday-morning cartoon on the network, Garfield and Friends, which ran from 1988 to 1995). Rudolph the Red-Nosed Reindeer, produced in stop motion by Rankin/Bass, has been another annual holiday staple of CBS; however, that special first aired on NBC in 1964. As of 2011, Rudolph and Frosty the Snowman were the only two pre-1990 animated specials remaining on CBS; the broadcast rights to the Charlie Brown specials are now held by Apple, The Grinch rights by NBC, and the rights to the Garfield specials by Boomerang. All of these animated specials, from 1973 to 1990, began with a fondly remembered seven-second animated opening sequence, in which the words "A CBS Special Presentation" were displayed in colorful lettering (the ITC Avant Garde typeface, widely used in the 1970s, was used for the title logo). The word "SPECIAL", in all caps and repeated multiple times in multiple colors, slowly zoomed out from the frame in a spinning counterclockwise motion against a black background, and rapidly zoomed back into frame as a single word, in white, at the end; the sequence was accompanied by a jazzy though majestic up-tempo fanfare with dramatic horns and percussion (which was edited incidental music from the CBS crime drama Hawaii Five-O, titled "Call to Danger" on the Capitol Records soundtrack LP).
This opening sequence appeared immediately before all CBS specials of the period (such as the Miss USA pageants and the annual presentation of the Kennedy Center Honors), in addition to animated specials. ==== Classical music specials ==== CBS was also responsible for airing the series of Young People's Concerts, conducted by Leonard Bernstein. Telecast every few months between 1958 and 1972, first in black-and-white and then in color beginning in 1966, these programs introduced millions of children to classical music through the eloquent commentaries of Bernstein. The specials received several Emmy Award nominations, winning in 1961 and again in 1966, and were among the first programs ever broadcast from the Lincoln Center for the Performing Arts. Over the years, CBS has broadcast three different productions of Tchaikovsky's ballet The Nutcracker – two live telecasts of the George Balanchine New York City Ballet production in 1957 and 1958 respectively, a little-known German-American filmed production in 1965 (which was subsequently repeated three times and starred Edward Villella, Patricia McBride and Melissa Hayden), and beginning in 1977, the Mikhail Baryshnikov staging of the ballet, starring the Russian dancer along with Gelsey Kirkland – a version that would become a television classic, and remains so today (the broadcast of this production later moved to PBS). In April 1986, CBS presented a slightly abbreviated version of Horowitz in Moscow, a live piano recital by pianist Vladimir Horowitz, which marked his return to Russia after over 60 years. The recital was televised as an episode of CBS News Sunday Morning (televised at 9:00 a.m. Eastern Time in the U.S., as the recital was performed simultaneously at 4:00 p.m. in Russia). It was so successful that CBS repeated it a mere two months later by popular demand, this time on videotape rather than live.
In later years, the program was shown as a standalone special on PBS; the current DVD of the telecast omits the commentary by Charles Kuralt but includes additional selections not heard on the CBS telecast. In 1986, CBS telecast Carnegie Hall: The Grand Reopening in primetime, in what was then a rare move for a commercial broadcast network, since most primetime classical music specials were relegated to PBS and A&E by this time. The program was a concert commemorating the re-opening of Carnegie Hall after its complete renovation. A range of artists were featured, from classical conductor Leonard Bernstein to popular music singer Frank Sinatra. ==== Cinderella ==== To compete with NBC, which produced the televised version of the Mary Martin Broadway production of Peter Pan, CBS responded with a musical production of Cinderella, with music by Richard Rodgers and lyrics by Oscar Hammerstein II. Based upon the classic Charles Perrault fairy tale, it is the only Rodgers and Hammerstein musical to have been written for television. It was originally broadcast live in color on CBS on March 31, 1957, as a vehicle for Julie Andrews, who played the title role; that broadcast was seen by over 100 million people. It was subsequently remade by CBS in 1965, with Lesley Ann Warren, Stuart Damon, Ginger Rogers, and Walter Pidgeon among its stars; the remake also included the new song "Loneliness of Evening", which was originally composed in 1949 for South Pacific but was not performed in that musical. This version was rebroadcast several times on CBS into the early 1970s, and is occasionally broadcast on various cable networks to this day; both versions are available on DVD. ==== National Geographic ==== CBS was also the original broadcast home for the primetime specials produced by the National Geographic Society. The Geographic series in the U.S. 
started on CBS in 1964, before moving to ABC in 1973 (the specials subsequently moved to PBS – under the production of Pittsburgh member station WQED – in 1975 and NBC in 1995, before returning to PBS in 2000). The specials have featured stories on many scientific figures, such as Louis Leakey, Jacques Cousteau, and Jane Goodall, that not only showcased their work but helped make them internationally known and accessible to millions. A majority of the specials were narrated by various actors, notably Alexander Scourby during the CBS run. The success of the specials led in part to the creation of the National Geographic Channel, a cable channel launched in January 2001 as a joint venture between the National Geographic Society and Fox Cable Networks. The specials' distinctive theme music, by Elmer Bernstein, was also adopted by the National Geographic Channel. ==== Other notable specials ==== From 1949 to 2002, the Pillsbury Bake-Off, an annual national cooking contest, was broadcast on CBS as a special. Hosts for the broadcast included Arthur Godfrey, Art Linkletter, Bob Barker, Gary Collins, Willard Scott (although under contract with CBS' rival NBC), and Alex Trebek. The Miss USA beauty pageant aired on CBS from 1963 to 2002; during a large portion of that period, the telecast was often emceed by the host of one of CBS's game shows, including Bob Barker from 1967 to 1987 (at which point Barker, an animal rights activist who eventually convinced producers of The Price Is Right to cease offering fur coats as prizes on the program, quit in a dispute over their use), succeeded by Alan Thicke in 1988, Dick Clark from 1989 to 1993, and Bob Goen from 1994 to 1996. The pageant's highest viewership was recorded in the early 1980s, when it regularly topped the Nielsen ratings on the week of its broadcast. Viewership dropped sharply throughout the 1990s and 2000s, from an estimated viewership of 20 million to an average of 7 million from 2000 to 2001.
In 2002, Donald Trump (owner of the Miss USA pageant's governing body, the Miss Universe Organization) brokered a new deal with NBC, giving it half-ownership of the Miss USA, Miss Universe and Miss Teen USA pageants and moving them to that network as part of an initial five-year contract, which began in 2003 and ended in 2015 after 12 years amid Trump's controversial remarks about Mexican immigrants during the launch of his 2016 campaign for the Republican presidential nomination. On June 1, 1977, it was announced that Elvis Presley had signed a deal with CBS to appear in a new television special. Under the agreement, CBS would videotape Presley's concerts during the summer of 1977; the special was filmed during Presley's final tour at stops in Omaha, Nebraska (on June 19) and Rapid City, South Dakota (on June 21 of that year). CBS aired the special, Elvis in Concert, on October 3, 1977, nearly two months after Presley died in his Graceland mansion on August 16. Since its inception in 1978, CBS has been the sole broadcaster of The Kennedy Center Honors, a two-hour performing arts tribute typically taped and edited in December for later broadcast during the holiday season. == Stations == CBS has 15 owned-and-operated stations, and current and pending affiliation agreements with 228 additional television stations encompassing 50 states, the District of Columbia, two U.S. possessions (Guam and the U.S. Virgin Islands) and Bermuda and St. Vincent and the Grenadines. The network has a national reach of 95.96% of all households in the United States (or 299,861,665 Americans with at least one television set). Currently, New Jersey, New Hampshire and Delaware are the only U.S. 
states where CBS does not have a locally licensed affiliate (New Jersey is served by New York City O&O WCBS-TV and Philadelphia O&O KYW-TV; Delaware is served by KYW and Salisbury, Maryland, affiliate WBOC-TV; and New Hampshire is served by Boston O&O WBZ-TV and Burlington, Vermont, affiliate WCAX-TV). CBS maintains affiliations with low-power stations (broadcasting either in analog or digital) in a few markets, such as Harrisonburg, Virginia (WSVF-CD), Palm Springs, California (KPSP-CD), and Parkersburg, West Virginia (WIYE-LD). In some markets, including both of those mentioned, these stations also maintain digital simulcasts on a subchannel of a co-owned/co-managed full-power television station. CBS also maintains a sizeable number of subchannel-only affiliations, the majority of which are with stations in cities located outside of the 50 largest Nielsen-designated markets; the largest CBS subchannel affiliate by market size is KOGG in Wailuku, Hawaii, which serves as a repeater of Honolulu affiliate KGMB (the sister station of KOGG parent KHNL). Nexstar Media Group is the largest operator of CBS stations by numerical total, owning 49 CBS affiliates (counting satellites); Tegna Media is the largest operator of CBS stations in terms of overall market reach, owning 15 CBS-affiliated stations (including affiliates in the larger markets in Houston, Tampa and Washington, D.C.) that reach 8.9% of the country. == Related services == === Video-on-demand services === CBS provides video-on-demand access for delayed viewing of the network's programming through various means, including via its website at CBS.com; the network's apps for iOS, Android, and newer versions of Windows; a traditional VOD service called CBS on Demand available on most traditional cable and IPTV providers; and through content deals with Amazon Video (which holds exclusive streaming rights to the CBS drama series Extant and Under the Dome) and Netflix.
Notably, however, CBS is the only major broadcast network that does not provide recent episodes of its programming on Hulu (sister network The CW does offer its programming on the streaming service, albeit on a one-week delay after becoming available on the network's website on Hulu's free service, with users of its subscription service being granted access to newer episodes of CW series eight hours after their initial broadcast), due to concerns over cannibalizing viewership of some of the network's most prominent programs; episode back catalogs of certain past and present CBS series are nevertheless available on the service through an agreement with CBS Television Distribution. Upon the release of the app in March 2013, CBS restricted streaming of the most recent episode of any of the network's programs on its streaming app for Apple iOS devices until eight days after their initial broadcast to encourage live or same-week (via both DVR and cable on demand) viewing; programming selections on the app were limited until the release of its Google Play and Windows 8 apps in October 2013, which expanded the selections to include full episodes of all CBS series whose streaming rights the network does not license to other services.
In addition to providing full-length episodes of CBS programs, the service allows live programming streams of local CBS affiliates in 124 markets reaching 75% of the United States. CBS All Access offered the most recent episodes of the network's shows the day after their original broadcast, as well as complete back catalogs of most of its current series and a wide selection of episodes of classic series from the CBS Television Distribution and ViacomCBS Domestic Media Networks program library to subscribers of the service. CBS All Access also carried behind-the-scenes features from CBS programs and special events. Original programs aired on CBS All Access included Star Trek: Discovery, The Good Fight, and Big Brother: Over the Top. In December 2018, the service was launched in Australia under the name 10 All Access, due to its affiliation with CBS-owned free-to-air broadcaster Network 10. Due to local programming rights, not all content is shared with its U.S. counterpart, whilst the Australian version also features numerous full seasons of local Network 10 shows, all commercial-free. It was announced in September 2020 that the service would be rebranded as Paramount+ in early 2021, and would feature content from the wider ViacomCBS library following the re-merger between CBS and Viacom. The name was also extended to international markets and services such as 10 All Access. The rebrand to Paramount+ took place on March 4, 2021. === CBS HD === CBS' master feed is transmitted in 1080i high definition, the native resolution format for CBS Corporation's television properties. 
However, seven of its affiliates transmit the network's programming in 720p HD, while seven others carry the network feed in 480i standard definition either due to technical considerations for affiliates of other major networks that carry CBS programming on a digital subchannel or because a primary feed CBS affiliate has not yet upgraded its transmission equipment to allow content to be presented in HD. A small number of CBS stations and affiliates are also currently broadcasting at 1080p via an ATSC 3.0 multiplex station that simulcasts a station's programming, such as WNCN through WRDC in Durham, North Carolina, WTVF through WUXP-TV in Nashville, and KLAS-TV through KVCW in Las Vegas, Nevada. CBS began its conversion to high definition with the launch of its simulcast feed CBS HD in September 1998, at the start of the 1998–99 season. That year, CBS aired the first NFL game broadcast in high-definition, with the telecast of the New York Jets–Buffalo Bills game on November 8. CBS gradually converted much of its existing programming from standard definition to high definition beginning with the 2000–01 season, with select shows among that season's slate of freshmen scripted series being broadcast in HD starting with their debuts. The Young and the Restless became the first daytime soap opera to broadcast in HD on June 27, 2001. CBS' 14-year conversion to an entirely high-definition schedule ended in 2014, with Big Brother and Let's Make a Deal becoming the final two series to convert from 4:3 standard definition to HD (in contrast, NBC, Fox, and The CW were already airing their entire programming schedules – outside of Saturday mornings – in high definition by the 2010–11 season, while ABC was broadcasting its entire schedule in HD by the 2011–12 midseason).
All of the network's programming has been presented in full HD since then (except for certain holiday specials produced before 2005 – such as the Rankin-Bass specials – which continue to be presented in 4:3 SD, although some have been remastered for HD broadcast). On September 1, 2016, when ABC converted to a 16:9 widescreen presentation, CBS and The CW were the only remaining networks that framed their promotions and on-screen graphical elements for a 4:3 presentation. However, with CBS Sports' de facto 16:9 conversion at Super Bowl 50 and its new graphical presentation designed for 16:9 framing, in practice most CBS affiliates asked pay-TV providers to pass down a 16:9 widescreen presentation by default over their standard definition channels. This continued for CBS until September 24, 2018, when the network converted its on-screen graphical elements to a 16:9 widescreen presentation for all non-news and sports programs. Litton Entertainment continues to frame the graphical elements in its Dream Team programs within a 4:3 frame because they are positioned for future syndicated sales, though all of its programming has been in high definition. == Branding == === Logos === The CBS television network's initial logo, used from the 1940s to 1951, consisted of an oval spotlight which shone on the block letters "CBS". The present-day Eye device was conceived by William Golden, based on a Pennsylvania Dutch hex sign and a Shaker drawing. While the logo is commonly attributed to Golden, some design work may have been done by CBS staff designer Georg Olden, one of the first African-Americans to attract some attention in the postwar graphic design field. The Eye device made its broadcast debut on October 20, 1951. The following season, as Golden prepared a new "ident", CBS President Frank Stanton insisted on keeping the Eye device and using it as much as possible.
Golden died unexpectedly in 1959, and was replaced by Lou Dorfsman, one of his top assistants, who would go on to oversee all print and on-air graphics for CBS for the next 30 years. The CBS eye has since become a widely recognized symbol. While the logo has been used in different ways, the Eye device itself has never been redesigned. As part of a then-new graphical identity created by Trollbäck + Company that was used by the network during the 2006–2007 network television season, the eye was placed in a "trademark" position on show titles, days of the week and descriptive words, an approach that underscored the value of the design. The logo is alternately known as the "Eyemark", a branding used for CBS's domestic television syndication division, under the Eyemark Entertainment name, in the mid-to-late 1990s after Westinghouse Electric bought CBS, but before the King World acquisition (which Eyemark was folded into) and the subsequent merger with Viacom. Eyemark Entertainment was the result of the merger of MaXaM Entertainment (an independent television syndication firm which Westinghouse acquired shortly after its merger with CBS in 1996), Group W Productions (Westinghouse Broadcasting's own syndication division), and CBS Enterprises (CBS's syndication arm from the late 1960s to the early 1970s). The eye logo has served as inspiration for the logos of Associated Television (ATV) in the United Kingdom, Canal 4 in El Salvador, Televisa in Mexico, France 3, Latina Televisión in Peru, Fuji Television in Japan, Rede Bandeirantes and TV Globo in Brazil, and Canal 10 in Uruguay. In October 2011, the network celebrated the 60th anniversary of the introduction of the Eye logo, featuring special IDs of logo versions from previous CBS image campaigns being shown during the network's primetime lineup. CBS historically used a specially-commissioned variant of Didot, a close relative to Bodoni, as its corporate font until 2021.
=== Image campaigns === ==== 1980s ==== CBS has developed several notable image campaigns, and many of the network's best-known slogans were introduced in the 1980s. The "Reach for the Stars" campaign used during the 1981–82 season featured a space theme to capitalize on both CBS's stellar improvement in the ratings and the historic launch of the space shuttle Columbia. 1982's "Great Moments" juxtaposed scenes from classic CBS programs such as I Love Lucy with scenes from the network's then-current hits such as Dallas and M*A*S*H. From 1983 to 1986, CBS (by now firmly atop the ratings) featured a campaign based on the slogan "We've Got the Touch". Vocals for the campaign's jingle were contributed by Richie Havens (1983–84; one occasion in 1984–85) and Kenny Rogers (1985–86). The 1986–87 season ushered in the "Share the Spirit of CBS" campaign, the network's first to be produced entirely with computer graphics and digital video effects. Unlike most network campaign promos, the full-length version of "Share the Spirit" not only showed a brief clip preview of each new fall series but also utilized CGI effects to map out the entire fall schedule by night. The success of that campaign led to the 1987–88 "CBS Spirit" (or "CBSPIRIT") campaign. Like its predecessor, most "CBSpirit" promos utilized a procession of clips from the network's programs. However, the new graphic motif was a swirling (or "swishing") blue line that was used to represent "the spirit". The full-length promo, like the previous year's, had a special portion that identified new fall shows, but the mapped-out fall schedule shot was abandoned. For the 1988–89 season, CBS unveiled a new image campaign officially known as "Television You Can Feel", but more commonly identified as "You Can Feel It On CBS".
The goal was to convey a more sensual, new-age image through distinguished, advanced-looking computer graphics and soothing music backing images and clips of emotionally powerful scenes and characters. However, that season CBS's ratings went into a freefall, the deepest in the network's history. CBS ended the decade with "Get Ready for CBS", introduced with the 1989–90 season. The initial version was an ambitious campaign that attempted to elevate CBS out of last place (among the major networks); the motif centered around network stars interacting with each other in a remote studio set, getting ready for photo and television shoots, as well as for the new season on CBS. The high-energy promo song and the campaign's practices saw many customized variations by all of CBS's owned-and-operated stations and affiliates, which participated in the campaign per a network mandate. In addition, CBS became the first broadcast network to partner with a national retailer (in this case, Kmart) to encourage viewership, with the "CBS/Kmart Get Ready Giveaway". ==== 1990s ==== For the 1990–91 season, the campaign featured a new jingle performed by the Temptations, which featured an altered version of their hit "Get Ready". The early 1990s featured less-than-memorable campaigns, with simplified taglines such as "This is CBS" (1992) and "You're on CBS" (1995). Eventually, the promotions department gained momentum again late in the decade with "Welcome Home to a CBS Night" (1996–1997), shortened to "Welcome Home" (1997–1999), and succeeded by the spin-off campaign "The Address is CBS" (1999–2000), whose history can be traced back to a CBS slogan from the radio era of the 1940s, "The Stars' Address is CBS".
During the 1992 season, a four-note sound mark was introduced for the end-of-show network identification sequence; it was eventually adapted into the network's IDs and production company vanity cards following the closing credits of most of its programs during the "Welcome Home" era. ==== 2000s ==== Throughout the 2000s, CBS' ratings resurgence was backed by the network's "It's All Here" campaign (which introduced updated versions of the 1992 sound mark used during certain promotions and production company vanity cards during the closing credits of programs); a 2005 campaign introduced the slogan "Everybody's Watching", and the network's strategy led to the proclamation that it was "America's Most Watched Network". The network's 2006 campaign introduced the slogan "We Are CBS", with Don LaFontaine providing the voiceover for the IDs (as well as certain network promos) during this period. In 2009, the network introduced a campaign entitled "Only CBS", in which network promotions proclaimed several unique qualities it has (the slogan was also used in program promotions following the announcement of the timeslot of a particular program). The "America's Most Watched Network" slogan was re-introduced by CBS in 2011, used alongside the "Only CBS" slogan. ==== 2020s ==== In October 2020, CBS announced that it would begin to employ a more unified branding between the network and its divisions to strengthen brand awareness across platforms. The two main components of the rebranding are a "deconstructed eye" motif using the individual shapes of the eyemark (such as an animated station ID), and a five-note sound trademark developed by the audio design agency Antfood, phonetically resembling the "This is CBS" slogan. Alongside the rebranding, CBS Television Studios was renamed CBS Studios, and CBS Television Distribution was renamed CBS Media Ventures.
The network also dropped the "America's Most Watched Network" and "Only CBS" taglines, with chief marketing officer Michael Benson explaining that they aimed to "be something where people feel like they are part of the family. It's tough to unify if you're bragging about yourself." Because its programming is licensed to third-party streaming services, CBS programming began to carry a CBS Studios production logo based on the ident when applicable, and is billed with "CBS Original" or "CBS Presents" (specials) bylines in promotional material. As part of the rebranding, CBS News and CBS Sports also introduced new logos and imaging incorporating the deconstructed eye motif and sonic branding, with CBS News initially using it for coverage of the 2020 presidential election, and CBS Sports launching its rebrand ahead of Super Bowl LV in 2021. In December 2022, CBS News and Stations began to deploy the branding on the local news operations of CBS's owned-and-operated stations, with most now being branded as "CBS News (region)" to align themselves with CBS News and its chain of local streaming news channels (with some exceptions in markets with heritage station brands, such as KPIX) and adopting new graphics and music incorporating the eye motif and sound mark (replacing Frank Gari's "Enforcer" music package, which was based on a theme historically used by WBBM-TV). == International broadcasts == CBS programs are shown outside the United States through various Paramount Global international networks and/or content agreements, and in two North American countries through U.S.-based CBS stations. Sky News broadcasts the CBS Evening News on its channels serving the United Kingdom, Ireland, Australia, New Zealand, and Italy.
=== Canada === In Canada, CBS network programming is carried on cable, satellite, and IPTV providers through affiliates and owned-and-operated stations of the network that are located within proximity to the Canada–United States border (such as KIRO-TV in Seattle; KBJR-DT2 in Duluth, Minnesota; WWJ-TV in Detroit; WIVB-TV in Buffalo, New York; and WCAX-TV in Burlington, Vermont), some of which may also be receivable over-the-air in parts of southern Canada depending on the signal coverage of the station. Most programming is generally the same as it airs in the United States; however, some CBS programming on U.S.-based affiliates permitted for carriage by the Canadian Radio-television and Telecommunications Commission by Canadian cable and satellite providers is subject to simultaneous substitution, a practice in which a pay television provider supplants an American station's signal with a feed from a Canadian station/network airing a particular program in the same time slot to protect domestic advertising revenue. === Bermuda === In Bermuda, CBS maintains an affiliation with Hamilton-based ZBM-TV, locally owned by Bermuda Broadcasting Company. === Mexico === CBS programming is available in Mexico through affiliates in markets located within proximity to the Mexico–United States border (such as KYMA-DT/Yuma, Arizona; KVTV/Laredo, Texas; KDBC-TV/El Paso, Texas; KVEO-DT2/Brownsville/Harlingen/McAllen, Texas; and KFMB-TV/San Diego), whose signals are readily receivable over-the-air in border areas of northern Mexico. === Central America, the Dominican Republic and the Caribbean === In Central America, the Dominican Republic and the Caribbean, many subscription providers carry either select U.S.-based CBS-affiliated stations or the main network feed from CBS O&Os WCBS-TV in New York City or WFOR-TV in Miami. In addition, the network's programming has been available in the U.S. Virgin Islands since 2019 on WCVI-TV in Christiansted (owned by Lilly Broadcasting).
=== Ecuador === In Ecuador, many subscription providers carry either select U.S.-based CBS-affiliated stations or the main network feed from CBS O&Os WCBS-TV in New York City or WFOR-TV in Miami. === Peru === In Peru, many subscription providers carry either select U.S.-based CBS-affiliated stations or the main network feed from CBS O&Os WCBS-TV in New York City or WFOR-TV in Miami. === Venezuela === In Venezuela, many subscription providers carry either select U.S.-based CBS-affiliated stations or the main network feed from CBS O&Os WCBS-TV in New York City or WFOR-TV in Miami. === Colombia === In Colombia, many subscription providers carry either select U.S.-based CBS-affiliated stations or the main network feed from CBS O&Os WCBS-TV in New York City or WFOR-TV in Miami. === Guam === In the U.S. territory of Guam, the network is affiliated with low-power station KUAM-LP in Hagåtña. Entertainment and non-breaking news programming is shown day and date on a one-day broadcast delay, as Guam is located on the west side of the International Date Line (for example, NCIS, which airs on Tuesday nights, is carried on Wednesdays on KUAM-LP, and is advertised by the station as airing on the latter night in on-air promotions), with live programming and breaking news coverage airing as scheduled, meaning live sports coverage often airs early in the morning. === Puerto Rico === In the U.S. territory of Puerto Rico, CBS is carried through a special feed of Erie, Pennsylvania affiliate WSEE-TV, relayed via Mayagüez-based translator W22FA-D. === United Kingdom === On September 14, 2009, the international arm of CBS, CBS Studios International, reached a joint venture deal with Chellomedia to launch six CBS-branded channels in the United Kingdom – which would respectively replace Zone Romantica, Zone Thriller, Zone Horror, and Zone Reality, as well as timeshift services Zone Horror +1 and Zone Reality +1 – during the fourth quarter of that year. 
On October 1, 2009, it was announced that the first four channels, CBS Reality, CBS Reality +1, CBS Drama, and CBS Action (later CBS Justice), would launch on November 16, respectively replacing Zone Reality, Zone Reality +1, Zone Romantica and Zone Thriller. On April 5, 2010, Zone Horror and Zone Horror +1 were rebranded as Horror Channel and Horror Channel +1. CBS News and BBC News have maintained a news-sharing agreement since 2017, replacing the BBC's longtime agreement with ABC News and CBS' with Sky News (which would have ended in any event in 2018 due to that entity's purchase by NBCUniversal). As of the close of the Viacom merger on December 4, 2019, Channel 5 is a sister operation to CBS, though no major changes to CBS' relationship with the BBC are expected in the near term, as Channel 5 sub-contracts its news programming obligations to ITN. === Australia === Australian free-to-air broadcaster Network 10 has been owned by CBS Corporation since 2017 (and subsequently, Paramount Global). Network Ten's channels, 10, 10 Peach, 10 Bold, and Nickelodeon, all carry CBS programming, with the latter drawing extensively from the wider Paramount Global library including MTV and Nickelodeon. Before the acquisition, CBS had long been a major supplier of international programs to the network. The cost of maintaining program supply agreements with CBS and 21st Century Fox was a major factor in the network's unprofitability during the mid-2010s. Network Ten entered voluntary administration in June 2017; CBS Corporation, the network's largest creditor, chose to acquire the network, completing the transaction in November 2017.
=== Asia === ==== Hong Kong ==== In Hong Kong, the CBS Evening News was broadcast live during the early morning hours on ATV; networks in that territory maintain an agreement to rebroadcast portions of the program 12 hours after the initial broadcast to provide additional content in case their affiliates have insufficient news content to fill time during their local news programs. ==== Philippines ==== In the Philippines, CBS Evening News is broadcast on satellite network Q (a sister channel of GMA Network which is now GTV), while CBS This Morning is shown in that country on Lifestyle (now Metro Channel). Several CBS entertainment programs such as CSI, Late Show with David Letterman, and Survivor are broadcast by Studio 23 (now S+A) and Maxxx, which are both owned by ABS-CBN Corporation. 60 Minutes is currently broadcast on CNN Philippines as a part of their Stories block, which includes documentaries and is broadcast on Wednesday at 8:00 p.m. before CNN Philippines Nightly News, with replays as a stand-alone program on Saturdays at 8:00 a.m. and 5:00 p.m. and Sundays at 6:00 a.m., all in local time (UTC+8). ==== India ==== In India, CBS maintained a brand licensing agreement with Reliance Broadcast Network Ltd. for three CBS-branded channels: Big CBS Prime, Big CBS Spark, and Big CBS Love. These channels were shut down in late November 2013. Following the CBS-Viacom merger, the Hindi-language general entertainment channel Colors TV became a sister network to CBS through the Viacom18 joint venture with TV18. ==== Israel ==== In Israel, in 2012 the channels Zone Reality and Zone Romantica were rebranded as CBS Reality and CBS Drama, respectively. The channels were carried by Israeli television providers Yes and Hot, although as of 2018 they both only carry CBS Reality.
== Controversies == === Brown & Williamson interview === In 1995, CBS refused to air a 60 Minutes segment that featured an interview with a former president of research and development for Brown & Williamson, the U.S.'s third largest tobacco company. The controversy raised questions about the legal roles in decision-making and whether journalistic standards should be compromised despite legal pressures and threats. The decision nevertheless sent shockwaves throughout the television industry, the journalism community, and the country. This incident was the basis for the 1999 Michael Mann-directed drama film, The Insider. === Super Bowl XXXVIII halftime show controversy === In 2004, the Federal Communications Commission imposed a record $550,000 fine, the largest fine ever for a violation of federal decency laws, against CBS for an incident during its broadcast of Super Bowl XXXVIII in which singer Janet Jackson's right breast (which was partially covered by a piece of nipple jewelry) was briefly and accidentally exposed by guest performer Justin Timberlake at the end of a duet performance of Timberlake's 2003 single "Rock Your Body" during the halftime show (produced by then sister cable network MTV). Following the incident, CBS apologized to its viewers and denied foreknowledge of the incident, which was televised live. The incident resulted in a period of increased regulation of broadcast television and radio outlets (including self-imposed content regulation by networks and syndicators), which raised concerns surrounding censorship and freedom of speech, and resulted in the FCC voting to increase its maximum fine for indecency violations from US$27,500 to US$325,000. In 2008, a Philadelphia federal court annulled the fine imposed on CBS, labeling it "arbitrary and capricious". 
=== Killian documents controversy === On September 8, 2004, less than two months before the presidential election in which then-President George W. Bush defeated Democratic candidate John Kerry, CBS aired a controversial episode of 60 Minutes Wednesday questioning Bush's service in the Air National Guard in 1972 and 1973. Following allegations of forgery, CBS News admitted that four of the documents used in the story had not been properly authenticated, and that their source, Bill Burkett, had "deliberately misled" a CBS News producer who worked on the report about the documents' origins, out of a confidentiality promise to the actual source. The following January, CBS fired four people connected to the preparation of the segment. Former CBS News anchor Dan Rather filed a $70 million lawsuit against CBS and former corporate parent Viacom in September 2007, contending that the story, and his termination (he resigned as CBS News chief anchor in 2005), were mishandled. Parts of the suit were dismissed in 2008; in 2010, the entire suit was dismissed and Rather's motion to appeal was denied. === Hopper controversy === In January 2013, CNET named Dish Network's "Hopper with Sling" digital video recorder as a nominee for the CES "Best in Show" award (which is decided by CNET on behalf of its organizers, the Consumer Electronics Association), and named it the winner in a vote by the site's staff. However, CBS division CBS Interactive disqualified the Hopper and vetoed the results, as CBS was in active litigation with Dish Network over its AutoHop technology (which allows users to skip commercial advertisements during recorded programs). CNET announced that it would no longer review any product or service provided by companies with which CBS Corporation was in litigation. The "Best in Show" award was instead given to the Razer Edge tablet.
On January 14, 2013, CNET editor-in-chief Lindsey Turrentine said in a statement that the site's staff was in an "impossible" situation due to the conflict of interest posed by the lawsuit, and promised to prevent a similar incident from occurring again. The conflict also prompted the resignation of CNET senior writer Greg Sandoval. As a result of the controversy, the CEA announced on January 31, 2013, that CNET would no longer decide the CES Best in Show award winner due to the interference of CBS (with the position being offered to other technology publications), and the "Best in Show" award was jointly awarded to both the Hopper with Sling and the Razer Edge. === Harassment allegations === In July 2018, an article by Ronan Farrow in The New Yorker claimed that thirty "current and former CBS employees described harassment, gender discrimination, or retaliation" at CBS, and six women accused Les Moonves of harassment and intimidation. Following these allegations, it was reported on September 6, 2018, that CBS board members were negotiating Les Moonves's departure from the company. On September 9, 2018, The New Yorker reported that six additional women had raised accusations against Moonves, going back to the 1980s. Following this, Moonves resigned as chief executive of CBS the same day. == Presidents of CBS Entertainment == == See also == == Notes == == References == == Further reading == == External links == Official website CBS's channel on YouTube CBS Eye-dentity Logo Guidelines website Columbia Broadcasting System — Western States Museum of Broadcasting
https://en.wikipedia.org/wiki/CBS
Total functional programming (also known as strong functional programming, to be contrasted with ordinary, or weak functional programming) is a programming paradigm that restricts the range of programs to those that are provably terminating. == Restrictions == Termination is guaranteed by the following restrictions:
A restricted form of recursion, which operates only upon 'reduced' forms of its arguments, such as Walther recursion, substructural recursion, or "strongly normalizing" as proven by abstract interpretation of code.
Every function must be a total (as opposed to partial) function. That is, it must have a definition for everything inside its domain. There are several possible ways to extend commonly used partial functions such as division to be total: choosing an arbitrary result for inputs on which the function is normally undefined (such as ∀x ∈ ℕ. x ÷ 0 = 0 for division); adding another argument to specify the result for those inputs; or excluding them by use of type system features such as refinement types.
These restrictions mean that total functional programming is not Turing-complete. However, the set of algorithms that can be used is still huge. For example, any algorithm for which an asymptotic upper bound can be calculated (by a program that itself only uses Walther recursion) can be trivially transformed into a provably-terminating function by using the upper bound as an extra argument decremented on each iteration or recursion. For example, quicksort is not trivially shown to be substructural recursive, but it only recurs to a maximum depth of the length of the vector (worst-case time complexity O(n²)).
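The upper-bound transformation described above can be sketched in JavaScript (used here purely as an illustration; the function names are invented for this example):

```javascript
// Illustrative sketch: thread an explicit bound through the recursion.
// Starting the bound at the list's length guarantees termination, since
// quicksort recurs to a depth of at most the length of its input.
function qsortBounded(bound, xs) {
  // If the bound is spent or the list is trivially sorted, stop.
  if (bound === 0 || xs.length <= 1) return xs;
  const [pivot, ...rest] = xs;
  const lesser = rest.filter(x => x < pivot);
  const greater = rest.filter(x => x >= pivot);
  // Every recursive call strictly decreases the bound.
  return [
    ...qsortBounded(bound - 1, lesser),
    pivot,
    ...qsortBounded(bound - 1, greater),
  ];
}

function qsort(xs) {
  return qsortBounded(xs.length, xs); // the length is a valid depth bound
}

console.log(qsort([3, 1, 4, 1, 5])); // [1, 1, 3, 4, 5]
```

Note that if the bound ever reached zero, the remaining sublist would be returned unsorted, which is why the caller must supply a genuine upper bound on the recursion depth; a termination checker only needs to see that the extra argument strictly decreases.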
A quicksort implementation on lists (which would be rejected by a substructural recursive checker) is, using Haskell:

  qsort [] = []
  qsort (a:as) = qsort [x | x <- as, x < a] ++ [a] ++ qsort [x | x <- as, x >= a]

To make it substructural recursive using the length of the vector as a limit, we could do:

  qsort x = qsortSub x x
  -- minimum case
  qsortSub [] as = as -- shows termination
  -- standard case
  qsortSub (l:ls) [] = [] -- nothing to sort
  qsortSub (l:ls) [a] = [a] -- only one element, already sorted
  qsortSub (l:ls) (a:as) = qsortSub ls [x | x <- as, x < a] -- sort the lesser numbers
                           ++ [a] -- insert the pivot
                           ++ qsortSub ls [x | x <- as, x >= a] -- sort the greater numbers

Some classes of algorithms have no theoretical upper bound but do have a practical upper bound (for example, some heuristic-based algorithms can be programmed to "give up" after so many recursions, also ensuring termination). Another outcome of total functional programming is that both strict evaluation and lazy evaluation result in the same behaviour, in principle; however, one or the other may still be preferable (or even required) for performance reasons. In total functional programming, a distinction is made between data and codata—the former is finitary, while the latter is potentially infinite. Such potentially infinite data structures are used for applications such as I/O. Using codata entails operations such as corecursion. However, it is possible to do I/O in a total functional programming language (with dependent types) even without codata. Both Epigram and Charity could be considered total functional programming languages, even though they do not work in the way Turner specifies in his paper. So could programming directly in plain System F, in Martin-Löf type theory or the Calculus of Constructions. == See also == Termination analysis == References ==
https://en.wikipedia.org/wiki/Total_functional_programming
Prototype-based programming is a style of object-oriented programming in which behavior reuse (known as inheritance) is performed via a process of reusing existing objects that serve as prototypes. This model can also be known as prototypal, prototype-oriented, classless, or instance-based programming. Prototype-based programming uses generalized objects, which can then be cloned and extended. Using fruit as an example, a "fruit" object would represent the properties and functionality of fruit in general. A "banana" object would be cloned from the "fruit" object, and general properties specific to bananas would be appended. Each individual "banana" object would be cloned from the generic "banana" object. Compare this to the class-based paradigm, where a "fruit" class would be extended by a "banana" class. == History == The first prototype-based programming languages were Director a.k.a. Ani (on top of MacLisp) (1976–1979), and contemporaneously and not independently, ThingLab (on top of Smalltalk) (1977–1981), respective PhD projects by Kenneth Michael Kahn at MIT and Alan Hamilton Borning at Stanford (but working with Alan Kay at Xerox PARC). Borning introduced the word "prototype" in this context in his 1981 paper in ACM Transactions on Programming Languages and Systems (TOPLAS). Note, however, that these were both inspired by Winograd and Bobrow's KRL (1975–1976), which introduced the words and concepts of "prototype" and (multiple) "inheritance" in the related context of "Knowledge Representation"—for data rather than programs as such—itself based on Minsky's 1974 concept of Frames. The first prototype-based programming language with more than one implementer was probably Yale T Scheme (1981–1984), though like Director and ThingLab initially, it just spoke of objects without classes.
The language that made the name and notion of prototypes popular was Self (1985-1995), developed by David Ungar and Randall Smith to research topics in object-oriented language design. Since the late 1990s, the classless paradigm has grown increasingly popular. Some current prototype-oriented languages are JavaScript (and other ECMAScript implementations such as JScript and Flash's ActionScript 1.0), Lua, Cecil, NewtonScript, Io, Ioke, MOO, REBOL and AHK. Since the 2010s, a new generation of languages with pure functional prototypes has appeared, that reduce OOP to its very core: Jsonnet is a dynamic lazy pure functional language with a builtin prototype object system using mixin inheritance; Nix is a dynamic lazy pure functional language that builds an equivalent object system (Nix "extensions") in just two short function definitions (plus many other convenience functions). Both languages are used to define large distributed software configurations (Jsonnet being directly inspired by GCL, the Google Configuration Language, with which Google defines all its deployments, and has similar semantics though with dynamic binding of variables). Since then, other languages like Gerbil Scheme have implemented pure functional lazy prototype systems based on similar principles. == Design and implementation == Etymologically, a "prototype" means "first cast" ("cast" in the sense of being manufactured). A prototype is a concrete thing, from which other objects can be created by copying and modifying. For example, the International Prototype of the Kilogram is an actual object that really exists, from which new kilogram-objects can be created by copying. In comparison, a "class" is an abstract thing, in which objects can belong. For example, all kilogram-objects are in the class of KilogramObject, which might be a subclass of MetricObject, and so on. 
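The copy-and-modify idea can be sketched in JavaScript, where new objects are made directly from a concrete object rather than instantiated from an abstract class (the names here are invented for illustration):

```javascript
// Illustrative sketch: a concrete prototype, from which copies are made.
const prototypeKilogram = {
  massInGrams: 1000,
  describe() { return "mass: " + this.massInGrams + " g"; },
};

// Object.create makes a new object that delegates to the prototype.
const copy = Object.create(prototypeKilogram);
copy.label = "copy no. 1"; // the copy can be modified independently

console.log(copy.describe());         // "mass: 1000 g" (behavior delegated)
console.log(prototypeKilogram.label); // undefined (the original is untouched)
```

There is no class anywhere in this sketch: `prototypeKilogram` is itself an ordinary object, and the copy merely delegates to it for any property it does not define.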
Prototypal inheritance in JavaScript is described by Douglas Crockford as follows: "You make prototype objects, and then … make new instances. Objects are mutable in JavaScript, so we can augment the new instances, giving them new fields and methods. These can then act as prototypes for even newer objects. We don't need classes to make lots of similar objects… Objects inherit from objects. What could be more object oriented than that?" Advocates of prototype-based programming argue that it encourages the programmer to focus on the behavior of some set of examples and only later worry about classifying these objects into archetypal objects that are later used in a fashion similar to classes. Many prototype-based systems encourage the alteration of prototypes during run-time, whereas only very few class-based object-oriented systems (such as the dynamic object-oriented systems Common Lisp, Dylan, Objective-C, Perl, Python, Ruby, and Smalltalk) allow classes to be altered during the execution of a program. Almost all prototype-based systems are based on interpreted and dynamically typed languages. Systems based on statically typed languages are technically feasible, however. The Omega language discussed in Prototype-Based Programming is an example of such a system, though according to Omega's website even Omega is not exclusively static, but rather its "compiler may choose to use static binding where this is possible and may improve the efficiency of a program." == Object construction == In prototype-based languages there are no explicit classes. Objects inherit directly from other objects through a prototype property. The prototype property is called prototype in Self and JavaScript, or proto in Io. There are two methods of constructing new objects: ex nihilo ("from nothing") object creation or through cloning an existing object. 
The former is supported through some form of object literal declaration, where objects can be defined at runtime through special syntax such as {...} and assigned directly to a variable. While most systems support a variety of cloning, ex nihilo object creation is not as prominent. In class-based languages, a new instance is constructed through a class's constructor function, a special function that reserves a block of memory for the object's members (properties and methods) and returns a reference to that block. An optional set of constructor arguments can be passed to the function and are usually held in properties. The resulting instance will inherit all the methods and properties that were defined in the class, which acts as a kind of template from which similarly typed objects can be constructed. Systems that support ex nihilo object creation allow new objects to be created from scratch without cloning from an existing prototype. Such systems provide a special syntax for specifying the properties and behaviors of new objects without referencing existing objects. In many prototype languages there exists a root object, often called Object, which is set as the default prototype for all other objects created in run-time and which carries commonly needed methods such as a toString() function to return a description of the object as a string. One useful aspect of ex nihilo object creation is to ensure that a new object's slot (properties and methods) names do not have namespace conflicts with the top-level Object object. (In the JavaScript language, one can do this by using a null prototype, i.e. Object.create(null).) Cloning refers to a process whereby a new object is constructed by copying the behavior of an existing object (its prototype). The new object then carries all the qualities of the original. From this point on, the new object can be modified. 
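Both construction methods can be sketched in JavaScript; the fruit/banana names follow the earlier example and are purely illustrative:

```javascript
// Ex nihilo: an object literal creates an object from scratch;
// its prototype defaults to Object.prototype.
const fruit = {
  name: "fruit",
  color: "unspecified",
  describe() { return this.name + " (" + this.color + ")"; }
};

// Cloning: Object.create makes a new object whose prototype is `fruit`.
const banana = Object.create(fruit);
banana.name = "banana";   // own slot, shadows the prototype's slot
banana.color = "yellow";

console.log(banana.describe());                       // "banana (yellow)"
console.log(Object.getPrototypeOf(banana) === fruit); // true

// A null prototype avoids inheriting anything from Object.prototype,
// so slot names cannot conflict with methods like toString.
const bare = Object.create(null);
console.log(Object.getPrototypeOf(bare));             // null
```

Here Object.create performs the cloning step by installing the given object as the new object's prototype; the clone starts with no own slots and looks everything up through that link.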
In some systems the resulting child object maintains an explicit link (via delegation or resemblance) to its prototype, and changes in the prototype cause corresponding changes to be apparent in its clone. Other systems, such as the Forth-like programming language Kevo, do not propagate change from the prototype in this fashion and instead follow a more concatenative model where changes in cloned objects do not automatically propagate across descendants. == Delegation == In prototype-based languages that use delegation, the language runtime is capable of dispatching the correct method or finding the right piece of data simply by following a series of delegation pointers (from object to its prototype) until a match is found. All that is required to establish this behavior-sharing between objects is the delegation pointer. Unlike the relationship between class and instance in class-based object-oriented languages, the relationship between the prototype and its offshoots does not require that the child object have a memory or structural similarity to the prototype beyond this link. As such, the child object can continue to be modified and amended over time without rearranging the structure of its associated prototype as in class-based systems. It is also important to note that not only data, but also methods can be added or changed. For this reason, some prototype-based languages refer to both data and methods as "slots" or "members". == Concatenation == In concatenative prototyping, the approach implemented by the Kevo programming language, there are no visible pointers or links to the original prototype from which an object is cloned. The prototype (parent) object is copied rather than linked to, and there is no delegation. As a result, changes to the prototype will not be reflected in cloned objects. Incidentally, the Cosmos programming language achieves the same through the use of persistent data structures. 
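The difference between the two models can be sketched in JavaScript: Object.create gives delegation, while Object.assign emulates Kevo-style concatenation by copying slots. (JavaScript itself is delegation-based, so the copy here is only an emulation of the concatenative model.)

```javascript
const prototype = { greet() { return "hello"; } };

// Delegation: the clone keeps a live link to its prototype.
const delegated = Object.create(prototype);

// Concatenation (emulated): slots are copied, no link is kept.
const concatenated = Object.assign({}, prototype);

// A later change to the prototype...
prototype.greet = function () { return "goodbye"; };

console.log(delegated.greet());    // "goodbye", looked up via the link
console.log(concatenated.greet()); // "hello", its own copied slot
```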
The main conceptual difference under this arrangement is that changes made to a prototype object are not automatically propagated to clones. This may be seen as an advantage or disadvantage. (However, Kevo does provide additional primitives for publishing changes across sets of objects based on their similarity — so-called family resemblances or clone family mechanism — rather than through taxonomic origin, as is typical in the delegation model.) It is also sometimes claimed that delegation-based prototyping has an additional disadvantage in that changes to a child object may affect the later operation of the parent. However, this problem is not inherent to the delegation-based model and does not exist in delegation-based languages such as JavaScript, which ensure that changes to a child object are always recorded in the child object itself and never in parents (i.e. the child's value shadows the parent's value rather than changing the parent's value). In simplistic implementations, concatenative prototyping will have faster member lookup than delegation-based prototyping (because there is no need to follow the chain of parent objects), but will conversely use more memory (because all slots are copied, rather than there being a single slot pointing to the parent object). More sophisticated implementations can avoid this problem, although trade-offs between speed and memory are required. For example, systems with concatenative prototyping can use a copy-on-write implementation to allow for behind-the-scenes data sharing — such an approach is indeed followed by Kevo. Conversely, systems with delegation-based prototyping can use caching to speed up data lookup. == Criticism == Advocates of class-based object models who criticize prototype-based systems often have concerns similar to the concerns that proponents of static type systems for programming languages have of dynamic type systems (see datatype). 
Usually, such concerns involve correctness, safety, predictability, efficiency and programmer unfamiliarity. On the first three points, classes are often seen as analogous to types (in most statically typed object-oriented languages they serve that role) and are proposed to provide contractual guarantees to their instances, and to users of their instances, that they will behave in some given fashion. Regarding efficiency, declaring classes simplifies many compiler optimizations that allow developing efficient method and instance-variable lookup. For the Self language, much development time was spent developing compilation and interpretation techniques to improve the performance of prototype-based systems versus class-based systems. A common criticism made against prototype-based languages is that the community of software developers is unfamiliar with them, despite the popularity and market permeation of JavaScript. However, knowledge about prototype-based systems is increasing with the proliferation of JavaScript frameworks and the complex use of JavaScript as the World Wide Web (Web) matures. ECMAScript 6 introduced classes as syntactic sugar over JavaScript's existing prototype-based inheritance, providing an alternative way to create objects and manage inheritance. == Languages supporting prototype-based programming == == See also == Class-based programming (contrast) Differential inheritance Programming paradigm == References == == Further reading == Abadi, Martin; Luca Cardelli (1996). A Theory of Objects. Springer-Verlag. ISBN 978-1-4612-6445-3. Class Warfare: Classes vs. Prototypes, by Brian Foote. Noble, James; Taivalsaari, Antero; Moore, Ivan, eds. (1999). Prototype-Based Programming: Concepts, Languages and Applications. Springer-Verlag. ISBN 981-4021-25-3. Using Prototypical Objects to Implement Shared Behavior in Object Oriented Systems, by Henry Lieberman, 1986.
https://en.wikipedia.org/wiki/Prototype-based_programming
Kotlin () is a cross-platform, statically typed, general-purpose high-level programming language with type inference. Kotlin is designed to interoperate fully with Java, and the JVM version of Kotlin's standard library depends on the Java Class Library, but type inference allows its syntax to be more concise. Kotlin mainly targets the JVM, but also compiles to JavaScript (e.g., for frontend web applications using React) or native code via LLVM (e.g., for native iOS apps sharing business logic with Android apps). Language development costs are borne by JetBrains, while the Kotlin Foundation protects the Kotlin trademark. On 7 May 2019, Google announced that the Kotlin programming language had become its preferred language for Android app developers. Since the release of Android Studio 3.0 in October 2017, Kotlin has been included as an alternative to the standard Java compiler. The Android Kotlin compiler emits Java 8 bytecode by default (which runs in any later JVM), but allows targeting Java 9 up to 20 for optimization or access to newer features; it has bidirectional interoperability support for JVM record classes, introduced in Java 16, considered stable as of Kotlin 1.5. Kotlin has support for the web with Kotlin/JS, through an intermediate representation-based backend which has been declared stable since version 1.8, released December 2022. Kotlin/Native (e.g. for Apple silicon support) has been declared stable since version 1.9.20, released November 2023. == History == === Name === The name is derived from Kotlin Island, a Russian island in the Gulf of Finland, near Saint Petersburg. Andrey Breslav, Kotlin's former lead designer, mentioned that the team decided to name it after an island, in imitation of the Java programming language which shares a name with the Indonesian island of Java. === Development === The first commit to the Kotlin Git repository was on November 8, 2010. 
In July 2011, JetBrains unveiled Project Kotlin, a new language for the JVM, which had been under development for a year. JetBrains lead Dmitry Jemerov said that most languages did not have the features they were looking for, with the exception of Scala. However, he cited the slow compilation time of Scala as a deficiency. One of the stated goals of Kotlin is to compile as quickly as Java. In February 2012, JetBrains open sourced the project under the Apache 2 license. JetBrains expected Kotlin to drive IntelliJ IDEA sales. Kotlin 1.0 was released on February 15, 2016. This is considered to be the first officially stable release, and JetBrains has committed to long-term backwards compatibility starting with this version. At Google I/O 2017, Google announced first-class support for Kotlin on Android. Kotlin 1.1 was released on March 1, 2017. Kotlin 1.2 was released on November 28, 2017. The ability to share code between the JVM and JavaScript platforms was newly added in this release (multiplatform programming has since been upgraded from "experimental" to a beta feature). A full-stack demo has been made with the new Kotlin/JS Gradle Plugin. Kotlin 1.3 was released on 29 October 2018, adding support for coroutines for use with asynchronous programming. On 7 May 2019, Google announced that the Kotlin programming language is now its preferred language for Android app developers. Kotlin 1.4 was released in August 2020, with some slight changes to support for Apple's platforms, i.e. the Objective-C/Swift interop. Kotlin 1.5 was released in May 2021. Kotlin 1.6 was released in November 2021. Kotlin 1.7 was released in June 2022, including the alpha version of the new Kotlin K2 compiler. Kotlin 1.8 was released in December 2022; 1.8.0 was released on January 11, 2023. Kotlin 1.9 was released in July 2023; 1.9.0 was released on July 6, 2023. Kotlin 2.0 was released in May 2024; 2.0.0 was released on May 21, 2024. 
Kotlin 2.1 was released in November 2024; 2.1.0 was released on November 27, 2024. == Design == Development lead Andrey Breslav has said that Kotlin is designed to be an industrial-strength object-oriented language, and a "better language" than Java, but still be fully interoperable with Java code, allowing companies to make a gradual migration from Java to Kotlin. Semicolons are optional as a statement terminator; in most cases a newline is sufficient for the compiler to deduce that the statement has ended. Kotlin variable declarations and parameter lists have the data type come after the variable name (and with a colon separator), similar to Ada, BASIC, Pascal, TypeScript and Rust. According to an article by Roman Elizarov, the current project lead, this results in alignment of variable names and is more pleasing to the eye, especially when there are several variable declarations in succession and one or more of the types is too complex for type inference, or needs to be declared explicitly for human readers to understand. The influence of Scala on Kotlin can be seen in the extensive support for both object-oriented and functional programming and in a number of specific features: there is a distinction between mutable and immutable variables (the var vs. val keyword); all classes are public and final (non-inheritable) by default; and functions and methods support default arguments, variable-length argument lists, and named arguments. Kotlin 1.3 added support for contracts, which are stable for the standard library declarations, but still experimental for user-defined declarations. Contracts are inspired by Eiffel's design by contract programming paradigm. Following Scala.js, Kotlin code may be transpiled to JavaScript, allowing for interoperability between code written in the two languages. This can be used either to write full web applications in Kotlin, or to share code between a Kotlin backend and a JavaScript frontend. 
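A minimal sketch of the features listed above; the names Vehicle, Bicycle and describe are illustrative:

```kotlin
// val declares a read-only binding, var a mutable one; the type
// follows the name after a colon and can often be inferred.
val language: String = "Kotlin"
var counter = 0

// Classes are public and final by default; "open" permits subclassing.
open class Vehicle(val wheels: Int)
class Bicycle : Vehicle(wheels = 2)

// Default arguments, a vararg parameter, and named arguments.
fun describe(name: String, wheels: Int = 4, vararg tags: String): String =
    "$name: $wheels wheels, tags=${tags.joinToString()}"

fun main() {
    counter += 1
    println(describe(name = "car"))               // named arg, default wheels
    println(describe("bike", 2, "light", "fast")) // positional args + varargs
    println(Bicycle().wheels)                     // 2
}
```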
== Syntax == === Procedural programming style === Kotlin relaxes Java's restriction of allowing static methods and variables to exist only within a class body. Static objects and functions can be defined at the top level of the package without needing a redundant class level. For compatibility with Java, Kotlin provides a JvmName annotation which specifies a class name used when the package is viewed from a Java project. For example, @file:JvmName("JavaClassName"). === Main entry point === As in C, C++, C#, Java, and Go, the entry point to a Kotlin program is a function named "main", which may be passed an array containing any command-line arguments. This parameter is optional since Kotlin 1.3. Perl, PHP, and Unix shell–style string interpolation is supported. Type inference is also supported. === Extension functions === Similar to C#, Kotlin allows adding an extension function to any class without the formalities of creating a derived class with new functions. An extension function has access to all the public interface of a class, which it can use to create a new function interface to a target class. An extension function will appear exactly like a function of the class and will be shown in code completion inspection of class functions. For example, by placing a declaration such as fun String.lastChar(): Char = this[this.length - 1] at the top level of a package, the String class is extended to include a lastChar function that was not included in the original definition of the String class. === Scope functions === Kotlin has five scope functions, which allow the changing of scope within the context of an object. The scope functions are let, run, with, apply, and also. === Unpack arguments with spread operator === Similar to Python, the spread operator asterisk (*) unpacks an array's contents as individual arguments to a function; for example, an existing array can be passed to a function taking a variable number of arguments by prefixing it with *. === Destructuring declarations === Destructuring declarations decompose an object into multiple variables at once, e.g. a 2D coordinate object might be destructured into two integers, x and y. 
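The spread operator and a destructuring declaration from the preceding sections can be sketched as follows (the Coordinate and sum names are illustrative):

```kotlin
// Data classes automatically support destructuring via componentN().
data class Coordinate(val x: Int, val y: Int)

// A vararg function that the spread operator can target.
fun sum(vararg values: Int): Int = values.sum()

fun main() {
    val (x, y) = Coordinate(3, 4)  // destructure into two variables
    println(x + y)                 // 7

    val numbers = intArrayOf(1, 2, 3)
    println(sum(*numbers))         // * unpacks the array: 6
}
```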
For example, the Map.Entry object supports destructuring to simplify access to its key and value fields. === Nested functions === Kotlin allows local functions to be declared inside of other functions or methods. === Classes are final by default === In Kotlin, to derive a new class from a base class type, the base class needs to be explicitly marked as "open". This is in contrast to most object-oriented languages such as Java, where classes are open by default. A base class must be declared open before a new subclass can be derived from it. === Abstract classes are open by default === Abstract classes define abstract or "pure virtual" placeholder functions that will be defined in a derived class. Abstract classes are open by default. === Classes are public by default === Kotlin provides the following keywords to restrict visibility for top-level declarations, such as classes, and for class members: public, internal, protected, and private. The meaning of each keyword differs depending on whether it is applied to a class member or to a top-level declaration. === Primary constructor vs. secondary constructors === Kotlin supports the specification of a "primary constructor" as part of the class definition itself, consisting of an argument list following the class name. This argument list supports an expanded syntax on Kotlin's standard function argument lists that enables declaration of class properties in the primary constructor, including visibility, extensibility, and mutability attributes. Additionally, when defining a subclass, properties in super-interfaces and super-classes can be overridden in the primary constructor. However, in cases where more than one constructor is needed for a class, a more general constructor can be defined using secondary constructor syntax, which closely resembles the constructor syntax used in most object-oriented languages like C++, C#, and Java. 
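A minimal sketch of a primary constructor with a delegating secondary constructor (the Point class is illustrative):

```kotlin
// The primary constructor appears in the class header and can
// declare properties (val/var) directly.
class Point(val x: Int, val y: Int) {
    // A secondary constructor must delegate to the primary one.
    constructor(both: Int) : this(both, both)
}

fun main() {
    val p = Point(1, 2)  // primary constructor
    val q = Point(5)     // secondary constructor: x = y = 5
    println(p.y)         // 2
    println(q.x)         // 5
}
```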
=== Sealed classes === Sealed classes and interfaces restrict subclass hierarchies, giving the author more control over the inheritance hierarchy. A sealed interface or class is declared with the sealed modifier. All the subclasses of the sealed class are defined at compile time. No new subclasses can be added to it after the compilation of the module having the sealed class. For example, a sealed class in a compiled jar file cannot be subclassed. === Data classes === Kotlin's data class construct defines classes whose primary purpose is storing data, similar to Java's record types. Like Java's record types, the construct is similar to normal classes except that the key methods equals, hashCode and toString are automatically generated from the class properties. Unlike Java's records, data classes can themselves inherit from other classes. === Kotlin interactive shell === === Kotlin as a scripting language === Kotlin can also be used as a scripting language. A script is a Kotlin source file using the .kts filename extension, with executable source code at the top-level scope. Scripts can be run by passing the -script option and the corresponding script file to the compiler. === Null safety === Kotlin makes a distinction between nullable and non-nullable data types. All nullable objects must be declared with a "?" postfix after the type name. Operations on nullable objects need special care from developers: a null-check must be performed before using the value, either explicitly, or with the aid of Kotlin's null-safe operators: ?. (the safe navigation operator) can be used to safely access a method or property of a possibly null object. If the object is null, the method will not be called and the expression evaluates to null. ?: (the null coalescing operator) is a binary operator that returns the first operand, if non-null, else the second operand. It is often referred to as the Elvis operator, due to its resemblance to an emoticon representation of Elvis Presley. 
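The two null-safe operators can be combined, as in this illustrative sketch (the function name is an assumption, not from the Kotlin documentation):

```kotlin
// The parameter type String? admits null; a plain String would not.
fun firstWordLength(text: String?): Int {
    // ?. short-circuits to null instead of throwing on a null receiver;
    // ?: (the Elvis operator) supplies a fallback when the result is null.
    return text?.split(" ")?.firstOrNull()?.length ?: 0
}

fun main() {
    println(firstWordLength("hello world"))  // 5
    println(firstWordLength(null))           // 0
}
```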
=== Lambdas === Kotlin provides support for higher-order functions and anonymous functions, or lambdas. Lambdas are declared using braces, { }. If a lambda takes parameters, they are declared within the braces and followed by the -> operator. === "Hello world" example === (Taken from and explained at https://kotlinlang.org/docs/kotlin-tour-hello-world.html.) == Tools == Android Studio (based on IntelliJ IDEA) has official support for Kotlin, starting from Android Studio 3. Integration with common Java build tools is supported, including Apache Maven, Apache Ant, and Gradle. Emacs has a Kotlin Mode in its MELPA package repository. JetBrains also provides a plugin for Eclipse. IntelliJ IDEA has plug-in support for Kotlin. IntelliJ IDEA 15 was the first version to bundle the Kotlin plugin in the IntelliJ Installer, and to provide Kotlin support out of the box. Kotlin also integrates with Gradle, a popular build automation tool used to build and manage the lifecycle of Kotlin projects. == Applications == When Kotlin was announced as an official Android development language at Google I/O in May 2017, it became the third language fully supported for Android, after Java and C++. As of 2020, Kotlin was the most widely used language on Android, with Google estimating that 70% of the top 1,000 apps on the Play Store were written in Kotlin. Google itself had 60 apps written in Kotlin, including Maps and Drive. Many Android apps, such as Google Home, were in the process of being migrated to Kotlin, and therefore use both Kotlin and Java. Kotlin on Android is seen as beneficial for its null-pointer safety, as well as for its features that make for shorter, more readable code. In addition to its prominent use on Android, Kotlin was gaining traction in server-side development. The Spring Framework officially added Kotlin support with version 5, on 4 January 2017. 
To further support Kotlin, Spring has translated all its documentation to Kotlin, and added built-in support for many Kotlin-specific features such as coroutines. In addition to Spring, JetBrains has produced a Kotlin-first framework called Ktor for building web applications. In 2020, JetBrains found in a survey of developers who use Kotlin that 56% were using Kotlin for mobile apps, while 47% were using it for a web back-end. Just over a third of all Kotlin developers said that they were migrating to Kotlin from another language. Most Kotlin users were targeting Android (or otherwise on the JVM), with only 6% using Kotlin Native. == Adoption == In 2018, Kotlin was the fastest growing language on GitHub, with 2.6 times more developers compared to 2017. It is the fourth most loved programming language according to the 2020 Stack Overflow Developer Survey. Kotlin was also awarded the O'Reilly Open Source Software Conference Breakout Award for 2019. Many companies/organizations have used Kotlin for backend development: Allegro, Amazon, Atlassian, Cash App, Flux, Google, Gradle, JetBrains, Meshcloud, Norwegian Tax Administration, OLX, Pivotal, Rocket Travel, Shazam, and Zalando. Some companies/organizations have used Kotlin for web development: Barclay's Bank, Data2viz, Fritz2, and JetBrains. A number of companies have publicly stated they were using Kotlin: Basecamp; Corda, a distributed ledger developed by a consortium of well-known banks (such as Goldman Sachs, Wells Fargo, J.P. Morgan, Deutsche Bank, UBS, HSBC, BNP Paribas, and Société Générale), which has over 90% Kotlin code in its codebase; Coursera; DripStat; Duolingo; Meta; Netflix; Pinterest; Trello; and Uber. == See also == Comparison of programming languages == References == This article contains quotations from Kotlin tutorials which are released under an Apache 2.0 license. == External links == Official website
https://en.wikipedia.org/wiki/Kotlin_(programming_language)
In computer science, bridging describes systems that map the runtime behaviour of different programming languages so they can share common resources. They are often used to allow "foreign" languages to operate a host platform's native object libraries, translating data and state across the two sides of the bridge. Bridging contrasts with "embedding" systems that allow limited interaction through a black box mechanism, where state sharing is limited or non-existent. Apple Inc. has made heavy use of bridging on several occasions, notably in early versions of Mac OS X which bridged to older "classic" systems using the Carbon system as well as Java. Microsoft's Common Language Runtime, introduced with the .NET Framework, was designed to be multi-language from the start, and avoided the need for extensive bridging solutions. Both platforms have more recently added new bridging systems for JavaScript, Apple's ObjC-to-JS and Microsoft's HTML Bridge. == Concepts == === Functions, libraries and runtimes === Most programming languages include the concept of a subroutine or function, a mechanism that allows commonly used code to be encapsulated and re-used throughout a program. For instance, a program that makes heavy use of mathematics might need to perform the square root calculation on various numbers throughout the program, so this code might be isolated in a sqrt(aNumber) function that is "passed in" the number to perform the square root calculation on, and "returns" the result. In many cases the code in question already exists, either implemented in hardware or as part of the underlying operating system the program runs within. In these cases the sqrt function can be further simplified by calling the built-in code. Functions often fall into easily identifiable groups of similar capabilities, mathematics functions for instance, or handling text files. 
Functions are often gathered together in collections known as libraries that are supplied with the system or, more commonly in the past, the programming language. Each language has its own method of calling functions, so the libraries written for one language may not work with another; the semantics for calling functions in C differ from those in Pascal, so generally C programs cannot call Pascal libraries and vice versa. The commonly used solution to this problem is to pick one set of call semantics as the default system for the platform, and then have all programming languages conform to that standard. Most computer languages and platforms have added functionality that cannot be expressed in the call/return model of the function. Garbage collection, for instance, runs throughout the lifetime of the application's run. This sort of functionality is effectively "outside" the program; it is present but not expressed directly in the program itself. Functions like these are generally implemented in ever-growing runtime systems, libraries that are compiled into programs but not necessarily visible within the code. === Shared libraries and common runtimes === The introduction of shared library systems changed the model of conventional program construction considerably. In the past, library code was copied directly into programs by the "linker" and effectively became part of the program. With dynamic linking the library code (normally) exists in only one place, a vendor-provided file in the system that all applications share. Early systems presented many problems, often in performance terms, and shared libraries were largely isolated to particular languages or platforms, as opposed to the operating system as a whole. Many of these problems were addressed through the 1990s, and by the early 2000s most major platforms had switched to shared libraries as the primary interface to the entire system. 
Although such systems addressed the problem of providing common code libraries for new applications, these systems generally added their own runtimes as well. This meant that the language, library, and now the entire system, were often tightly linked together. For instance, under OpenStep the entire operating system was, in effect, an Objective-C program. Any programs running on it that wished to use the extensive object suite provided in OpenStep would not only have to be able to call those libraries using Obj-C semantics, but also interact with the Obj-C runtime to provide basic control over the application. In contrast, Microsoft's .NET Framework was designed from the start to be able to support multiple languages, initially C#, C++ and a new version of Visual Basic. To do this, MS isolated the object libraries and the runtime into the Common Language Infrastructure (CLI). Instead of programs compiling directly from the source code to the underlying runtime format, as is the case in most languages, under the CLI model all languages are first compiled to the Common Intermediate Language (CIL), which then calls into the Common Language Runtime (CLR). In theory, any programming language can use the CLI system and access .NET objects. === Bridging === Although platforms like OS X and .NET offer the ability for most programming languages to be adapted to the platform's runtime system, it is also the case that these programming languages often have a target runtime in mind: Objective-C essentially requires the Obj-C runtime, while C# does the same for the CLR. If one wants to use C# code within Obj-C, or vice versa, one has to find a version written to use the other runtime, which often does not exist. A more common version of this problem concerns the use of languages that are platform independent, like Java, which have their own runtimes and libraries. 
Although it is possible to build a Java compiler that calls the underlying system, like J#, such a system would not be able to interact with other Java code unless it too was re-compiled. Access to code in Java libraries may be difficult or impossible. The rise of the web browser as a sort of virtual operating system has made this problem more acute. The modern "programming" paradigm under HTML5 includes the JavaScript (JS) language, the Document Object Model as a major library, and the browser itself as a runtime environment. Although it would be possible to build a version of JS that runs on the CLR, this would largely defeat the purpose of a language designed largely for operating browsers: unless that compiler can interact with the browser directly, there is little purpose in using it. In these cases, and many like it, the need arises for a system that allows the two runtimes to interoperate. This is known as "bridging" the runtimes. == Examples == === Apple === Apple has made considerable use of bridging technologies since the earliest efforts that led to Mac OS X. When NeXT was first purchased by Apple, the plan was to build a new version of OpenStep, then known as Rhapsody, with an emulator known as a Blue Box that would run "classic" Mac OS programs. This led to considerable push-back from the developer community, and Rhapsody was cancelled. In its place, OS X would implement many of the older Mac OS calls on top of core functionality in OpenStep, providing a path for existing applications to be gracefully migrated forward. To do this, Apple took useful code from the OpenStep platform and re-implemented the core functionality in a pure-C library known as Core Foundation, or CF for short. OpenStep's libraries calling CF underlying code became the Cocoa API, while the new Mac-like C libraries became the Carbon API. 
As the C and Obj-C sides of the system needed to share data, and the data on the Obj-C side was normally stored in objects (as opposed to base types), conversions to and from CF could be expensive. Apple was not willing to pay this performance penalty, so it implemented a scheme known as "toll-free bridging" to help reduce or eliminate this problem. At the time, Java was becoming a major player in the programming world, and Apple also provided a Java bridging solution that was developed for the WebObjects platform. This was a more classical bridging solution, with direct conversions between Java and OpenStep/CF types being completed in code, where required. Under Carbon, a program using CFStrings was using the same code as a Cocoa application using NSString, and the two could be bridged toll-free. With the Java bridge, CFStrings were instead cast into Java's own String objects, which required more work but made porting essentially invisible. Other developers made widespread use of similar technologies to provide support for other languages, including the "peering" system used to allow Obj-C code to call .NET code under Mono. As the need for these porting solutions waned, both Carbon and the Java Bridge were deprecated and eventually removed from later releases of the system. Java support was migrated to using the Java Native Interface (JNI), a standard from the Java world that allowed Java to interact with C-based code. On OS X, the JNI allowed Obj-C code to be used, with some difficulty. Around 2012, Apple's extensive work on WebKit led to the introduction of a new bridging technology that allows JavaScript program code to call into the Obj-C/Cocoa runtime, and vice versa. This allows browser automation using Obj-C, or alternately, the automation of Cocoa applications using JavaScript. Originally part of the Safari web browser, the code was promoted in 2013 to be part of the new OS X 10.9.
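The two bridging styles described above can be sketched abstractly: toll-free bridging works because both APIs agree on a single underlying in-memory representation, so crossing the bridge costs nothing, whereas a classical bridge such as the Java one must copy data into the other runtime's own types. The classes and functions below are invented stand-ins for illustration, not Apple's actual types or implementation:

```python
# Abstract sketch of the two bridging styles discussed above.
# The CFString/NSString stand-ins share one in-memory representation,
# so "bridging" between them is free; the "Java" side has its own
# representation and must be bridged by converting/copying.

class SharedStringRep:
    """One representation used by both the C-style and object-style
    APIs, in the spirit of toll-free bridging (invented stand-in)."""
    def __init__(self, chars):
        self.chars = chars

def cf_string_get_length(s):   # C-flavoured API over the shared rep
    return len(s.chars)

def ns_length(s):              # object-flavoured API over the SAME rep
    return len(s.chars)

class JavaStyleString:
    """A separate runtime's own string type; bridging to it
    requires a conversion (invented stand-in)."""
    def __init__(self, chars):
        self.chars = list(chars)   # its own copy, its own layout

def bridge_to_java(s):
    # Classical bridge: build a new object in the other runtime's type.
    return JavaStyleString(s.chars)

s = SharedStringRep("hello")
# Toll-free: both APIs operate on the very same object, no conversion.
assert cf_string_get_length(s) == ns_length(s) == 5
# Classical bridge: a distinct, converted object on the other side.
j = bridge_to_java(s)
assert j.chars == list("hello") and j is not s
```

The design trade-off the text describes falls out directly: the toll-free style costs nothing per call but forces both sides to commit to one layout up front, while the conversion style leaves each runtime free but pays a copy at every crossing.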
=== Microsoft === Although there are some examples of bridging being used in the past, Microsoft's CLI system was intended to support languages on top of the .NET system rather than running under native runtimes and bridging. This led to a number of new languages being implemented in the CLI system, often including either a hash mark (#) or "Iron" in their name. See the List of CLI languages for a more comprehensive set of examples. This concept was seen as an example of MS's embrace, extend and extinguish behaviour, as it produced Java-like languages (C# and J# for instance) that did not work with other Java code or use its libraries. Nevertheless, the "classic" Windows ecosystem included considerable code that needed to be used within the .NET world, and for this role MS introduced a well-supported bridging system. The system included numerous utilities and language features to ease the use of Windows or Visual Basic code within the .NET system, or vice versa. Microsoft has also introduced a JavaScript bridging technology for Silverlight, the HTML Bridge. The Bridge exposes JS types to .NET code, .NET types to JS code, and manages memory and access safety between them. === Other examples === Similar bridging technologies, often with JavaScript on one side, are common on various platforms. One example is the JavaScript bridge for the Android OS. The term is also sometimes used to describe object-relational mapping systems, which bridge the divide between the SQL database world and modern object programming languages. == References ==
https://en.wikipedia.org/wiki/Bridging_(programming)
Unreal Engine (UE) is a 3D computer graphics game engine developed by Epic Games, first showcased in the 1998 first-person shooter video game Unreal. Initially developed for PC first-person shooters, it has since been used in a variety of genres of games and has been adopted by other industries, most notably the film and television industry. Unreal Engine is written in C++ and features a high degree of portability, supporting a wide range of desktop, mobile, console, and virtual reality platforms. The latest generation, Unreal Engine 5, was launched in April 2022. Its source code is available on GitHub, and commercial use is granted based on a royalty model, with Epic charging 5% of revenues over US$1 million, which is waived for games published exclusively on the Epic Games Store. Epic has incorporated features in the engine from acquired companies such as Quixel, a move seen as benefiting from Fortnite's revenue. In 2014, Unreal Engine was named the world's "most successful videogame engine" by Guinness World Records. == History == === First generation === Unreal Engine 1 was initially developed in 1995 by Epic Games founder Tim Sweeney for Unreal and used software rendering. It supported Windows, Linux, Mac and Unix. Epic later began to license the engine to other game studios. === Unreal Engine 2 === Unreal Engine 2 transitioned the engine from software rendering to hardware rendering and brought support for the PlayStation 2, Xbox, and GameCube consoles. The first game using UE2 was released in 2002 and its last update was shipped in 2005. === Unreal Engine 3 === Unreal Engine 3 was one of the first game engines to support multithreading. It used DirectX 9 as its baseline graphics API, simplifying its rendering code. The first games using UE3 were released at the end of 2006. === Unreal Engine 4 === Unreal Engine 4 brought support for physically based materials and the "Blueprints" visual scripting system.
The first game using UE4 was released in April 2014. It was the first version of Unreal to be free to download with royalty payments on game revenue. === Unreal Engine 5 === Unreal Engine 5 features Nanite, a virtualized geometry system that allows game developers to use arbitrarily high quality meshes with automatically generated Level of Detail, and Lumen, a dynamic global illumination and reflections system that uses software and hardware ray tracing. It was revealed in May 2020 and officially released in April 2022. === Unreal Engine 6 === Sweeney discussed Unreal Engine 6 on the Lex Fridman podcast in 2025, and indicated that the first preview builds would be available in two to three years. The next version will aim to unify the currently separate development streams used for Fortnite and the broader engine. == Scripting == === UnrealScript === UnrealScript (often abbreviated to UScript) was Unreal Engine's native scripting language used for authoring game code and gameplay events before the release of Unreal Engine 4. The language was designed for simple, high-level game programming. UnrealScript was programmed by Tim Sweeney, who also created an earlier game scripting language, ZZT-OOP. Deus Ex lead programmer Chris Norden described it as "super flexible" but noted its low execution speed. Similar to Java, UnrealScript was object-oriented without multiple inheritance (classes all inherit from a common Object class), and classes were defined in individual files named for the class they define. Unlike Java, UnrealScript did not have object wrappers for primitive types. Interfaces were only supported in Unreal Engine generation 3 and a few Unreal Engine 2 games. UnrealScript supported operator overloading, but not method overloading, except for optional parameters. At the 2012 Game Developers Conference, Epic announced that UnrealScript was being removed from Unreal Engine 4 in favor of C++. 
Visual scripting would be supported by the Blueprints Visual Scripting system, a replacement for the earlier Kismet visual scripting system. Sweeney later recalled the decision: One of the key moments in Unreal Engine 4's development was, we had a series of debates about UnrealScript – the scripting language I'd built that we'd carried through three generations. And what we needed to do to make it competitive in the future. And we kept going through bigger and bigger feature lists of what we needed to do to upgrade it, and who could possibly do the work, and it was getting really, really unwieldy. And there was this massive meeting to try and sort it out, and try to cut things and decide what to keep, and plan and...there was this point where I looked at that and said 'you know, everything you're proposing to add to UnrealScript is already in C++. Why don't we just kill UnrealScript and move to pure C++? You know, maximum performance and maximum debuggability. It gives us all these advantages.' === Verse === Verse is the new scripting language for Unreal Engine, first implemented in Fortnite. Simon Peyton Jones, known for his contributions to the Haskell programming language, joined Epic Games in December 2021 as Engineering Fellow to work on Verse with his long-time colleague Lennart Augustsson and others. Conceived by Sweeney, it was officially presented at Haskell eXchange in December 2022 as an open source functional-logic language for the metaverse. A research paper, titled The Verse Calculus: a Core Calculus for Functional Logic Programming, was also published. The language was eventually launched in March 2023 as part of the release of the Unreal Editor for Fortnite (UEFN) at the Game Developers Conference, with plans to be available to all Unreal Engine users by 2025. == Marketplace == With Unreal Engine 4, Epic opened the Unreal Engine Marketplace in September 2014.
The Marketplace is a digital storefront that allows content creators and developers to provide art assets, models, sounds, environments, code snippets, and other features that others could purchase, along with tutorials and other guides. Some content is provided for free by Epic, including previously offered Unreal assets and tutorials. Prior to July 2018, Epic took a 30% share of the sales, but due to the success of Unreal and Fortnite Battle Royale, Epic retroactively reduced its take to 12%. == Usage == === Video games === Unreal Engine was originally designed to be used as the underlying technology for video games. The engine is used in a number of high-profile game titles with high graphics capabilities, including Hogwarts Legacy, PUBG: Battlegrounds, Final Fantasy VII Remake, Valorant and Yoshi's Crafted World, in addition to games developed by Epic, including Gears of War and Fortnite. Polish game developer CD Projekt is also planning to use the engine after retiring its in-house REDengine; its first game to use Unreal will be a remake of The Witcher. Usage of Unreal Engine has been steadily increasing since 2012, from an estimated 17% market share to 28% in 2024, compared to Unity's 50%. By sales, Unreal accounts for 31% compared to Unity's 26%, with proprietary engines accounting for a combined 42%, making Unreal the largest single engine by units sold. === Film and television === Unreal Engine has found use in filmmaking to create virtual sets that can track with a camera's motion around actors and objects and be rendered in real time to large LED screens and atmospheric lighting systems. This allows for real-time composition of shots, immediate editing of the virtual sets as needed, and the ability to shoot multiple scenes within a short period by just changing the virtual world behind the actors. The overall appearance was recognized as more natural than typical chroma key effects.
Among the productions to use these technologies were the live action television series The Mandalorian, Westworld and Fallout, and the animated series Zafari and Super Giant Robot Brothers. Jon Favreau and Lucasfilm's Industrial Light & Magic division worked with Epic in developing their StageCraft technology for The Mandalorian, based on a similar approach Favreau had used in The Lion King. Favreau then shared this technology approach with Westworld producers Jonathan Nolan and Lisa Joy. The show had already looked at the use of virtual sets before and had some technology established, but integrated the use of Unreal Engine, as with StageCraft, for its third season. Orca Studios, a Spanish-based company, has been working with Epic to establish multiple studios for virtual filming similar to the StageCraft approach, with Unreal Engine providing the virtual sets, particularly during the COVID-19 pandemic, which restricted travel. In January 2021, Deadline Hollywood announced that Epic was using part of its Epic MegaGrants to back its first animated feature film, Gilgamesh, to be produced fully in Unreal Engine by animation studios Hook Up, DuermeVela and FilmSharks. As part of an extension of its MegaGrants, Epic has also funded 45 additional projects since around 2020 for producing feature-length and short films in the Unreal Engine. By October 2022, Epic was working with several different groups at over 300 virtual sets across the world. Unreal Engine was used for motion capture in Lyle, Lyle, Crocodile. === Other uses === Unreal Engine has also been used by non-creative fields due to its availability and feature sets. It has been used as a basis for a virtual reality tool to explore pharmaceutical drug molecules in collaboration with other researchers, as a virtual environment to explore and design new buildings and automobiles, and by cable news networks to support real-time graphics.
Some car companies, most prominently including Rivian, use Unreal Engine in their infotainment systems. In March 2012, Epic Games announced a partnership with Virtual Heroes of Applied Research Associates to launch the Unreal Government Network, a program that handles Unreal Engine licenses for government agencies. Several projects originated with this support agreement, including anaesthesiology training software for U.S. Army physicians, a multiplayer crime scene simulation developed by the FBI Academy, and various applications for the Intelligence Advanced Research Projects Activity with the aim of helping intelligence analysts recognize and mitigate cognitive biases that might affect their work. Similarly, the DHS Science and Technology Directorate and the U.S. Army's Training and Doctrine Command and Research Laboratory employed the engine to develop a platform to train first responders titled Enhanced Dynamic Geo-Social Environment (EDGE). == Awards == The engine has received numerous awards:
Technology & Engineering Emmy Award from the National Academy of Television Arts and Sciences (NATAS) for "3D Engine Software for the Production of Animation" in 2018
Primetime Engineering Emmy Award from the Television Academy for exceptional developments in broadcast technology in 2020
Annie Award from ASIFA-Hollywood for technical advancement in animation in 2021
Game Developer Magazine Front Line Award for Best Game Engine for 2004, 2005, 2006, 2007, 2009, 2010, 2011, and 2012
Develop Industry Excellence Award for Best Engine for 2009, 2010, 2011, 2013, 2016, 2017, and 2018
Guinness World Record for most successful video game engine
== Legal aspects == The state of the Unreal Engine came up in Epic's 2020 legal action against Apple Inc. claiming anticompetitive behavior in Apple's iOS App Store. Epic had uploaded a version of Fortnite that violated Apple's App Store allowances.
Apple, in response, removed the Fortnite app and later threatened to terminate Epic's developer accounts, which would have prevented Epic from updating the Unreal Engine for iOS and macOS. The court granted Epic a permanent injunction preventing Apple from taking this step, agreeing that it would impact numerous third-party developers that rely on the Unreal Engine. == See also ==
Category:Unreal Engine games
Procedural generation
Make Something Unreal
Epic Citadel
The Matrix Awakens
On-set virtual production
Uncanny valley
Unity (game engine)
List of game engines
== References == == Further reading ==
https://en.wikipedia.org/wiki/Unreal_Engine
PL/I (Programming Language One, pronounced and sometimes written PL/1) is a procedural, imperative computer programming language initially developed by IBM. It is designed for scientific, engineering, business and system programming. It has been in continuous use by academic, commercial and industrial organizations since it was introduced in the 1960s. A PL/I American National Standards Institute (ANSI) technical standard, X3.53-1976, was published in 1976. PL/I's main domains are data processing, numerical computation, scientific computing, and system programming. It supports recursion, structured programming, linked data structure handling, fixed-point, floating-point, complex, character string handling, and bit string handling. The language syntax is English-like and suited for describing complex data formats with a wide set of functions available to verify and manipulate them. == Early history == In the 1950s and early 1960s, business and scientific users programmed for different computer hardware using different programming languages. Business users were moving from Autocoders via COMTRAN to COBOL, while scientific users programmed in Fortran, ALGOL, GEORGE, and others. The IBM System/360 (announced in 1964 and delivered in 1966) was designed as a common machine architecture for both groups of users, superseding all existing IBM architectures. Similarly, IBM wanted a single programming language for all users. It hoped that Fortran could be extended to include the features needed by commercial programmers. In October 1963 a committee was formed composed originally of three IBMers from New York and three members of SHARE, the IBM scientific users group, to propose these extensions to Fortran. Given the constraints of Fortran, they were unable to do this and embarked on the design of a new programming language based loosely on ALGOL labeled NPL. 
This acronym conflicted with that of the UK's National Physical Laboratory and was replaced briefly by MPPL (MultiPurpose Programming Language) and, in 1965, with PL/I (with a Roman numeral "I"). The first definition appeared in April 1964. IBM took NPL as a starting point and completed the design to a level at which the first compiler could be written: the NPL definition was incomplete in scope and in detail. Control of the PL/I language was vested initially in the New York Programming Center and later at the IBM UK Laboratory at Hursley. The SHARE and GUIDE user groups were involved in extending the language and had a role in IBM's process for controlling the language through their PL/I Projects. The experience of defining such a large language showed the need for a formal definition of PL/I. A project was set up in 1967 in IBM Laboratory Vienna to make an unambiguous and complete specification. This led in turn to one of the first large-scale formal methods for development, VDM. Fred Brooks is credited with ensuring PL/I had the CHARACTER data type. The language was first specified in detail in the manual "PL/I Language Specifications. C28-6571", written in New York in 1965, and superseded by "PL/I Language Specifications. GY33-6003", written by Hursley in 1967. IBM continued to develop PL/I in the late sixties and early seventies, publishing it in the GY33-6003 manual. These manuals were used by the Multics group and other early implementers. The first compiler was delivered in 1966. The Standard for PL/I was approved in 1976. == Goals and principles == The goals for PL/I evolved during the early development of the language. Competitiveness with COBOL's record handling and report writing was required. The language's scope of usefulness grew to include system programming and event-driven programming.
Additional goals for PL/I were:
Performance of compiled code competitive with that of Fortran (but this was not achieved)
Extensibility for new hardware and new application areas
Improved productivity of the programming process, transferring effort from the programmer to the compiler
Machine independence to operate effectively on the main computer hardware and operating systems
To achieve these goals, PL/I borrowed ideas from contemporary languages while adding substantial new capabilities and casting it with a distinctive concise and readable syntax. Many principles and capabilities combined to give the language its character and were important in meeting the language's goals:
Block structure, with underlying semantics (including recursion), similar to Algol 60. Arguments are passed using call by reference, using dummy variables for values where needed (call by value).
A wide range of computational data types, program control data types, and forms of data structure (strong typing).
Dynamic extents for arrays and strings with inheritance of extents by procedure parameters.
Concise syntax for expressions, declarations, and statements with permitted abbreviations. Suitable for a character set of 60 glyphs and sub-settable to 48.
An extensive structure of defaults in statements, options, and declarations to hide some complexities and facilitate extending the language while minimizing keystrokes.
Powerful iterative processing with good support for structured programming.
There were to be no reserved words (although the function names DATE and TIME initially made this goal impossible to meet). New attributes, statements and statement options could be added to PL/I without invalidating existing programs. Not even IF, THEN, ELSE, and DO were reserved.
Orthogonality: each capability to be independent of other capabilities and freely combined with other capabilities wherever meaningful.
Each capability to be available in all contexts where meaningful, to exploit it as widely as possible and to avoid "arbitrary restrictions". Orthogonality helps make the language "large".
Exception handling capabilities for controlling and intercepting exceptional conditions at run time.
Programs divided into separately compilable sections, with extensive compile-time facilities (a.k.a. macros), not part of the standard, for tailoring and combining sections of source code into complete programs. External names to bind separately compiled procedures into a single program.
Debugging facilities integrated into the language.
== Language summary == The language is designed to provide sufficient facilities to satisfy the needs of all programmers, regardless of what problems the language is being applied to. The summary is extracted from the ANSI PL/I Standard and the ANSI PL/I General-Purpose Subset Standard. A PL/I program consists of a set of procedures, each of which is written as a sequence of statements. The %INCLUDE construct is used to include text from other sources during program translation. All of the statement types are summarized here in groupings which give an overview of the language (the Standard uses this organization). (Features such as multi-tasking and the PL/I preprocessor are not in the Standard but are supported in the PL/I F compiler and some other implementations; these are discussed in the Language evolution section.) Names may be declared to represent data of the following types, either as single values, or as aggregates in the form of arrays, with a lower-bound and upper-bound per dimension, or structures (comprising nested structure, array and scalar variables): The arithmetic type comprises these attributes: The base, scale, precision and scale factor of the Picture-for-arithmetic type is encoded within the picture-specification.
The mode is specified separately, with the picture specification applied to both the real and the imaginary parts. Values are computed by expressions written using a specific set of operations and builtin functions, most of which may be applied to aggregates as well as to single values, together with user-defined procedures which, likewise, may operate on and return aggregate as well as single values. The assignment statement assigns values to one or more variables. There are no reserved words in PL/I. A statement is terminated by a semi-colon. The maximum length of a statement is implementation defined. A comment may appear anywhere in a program where a space is permitted and is preceded by the characters forward slash, asterisk and is terminated by the characters asterisk, forward slash (i.e. /* This is a comment. */). Statements may have a label-prefix introducing an entry name (ENTRY and PROCEDURE statements) or label name, and a condition prefix enabling or disabling a computational condition – e.g., (NOSIZE). Entry and label names may be single identifiers or identifiers followed by a subscript list of constants (as in L(12,2):A=0;). A sequence of statements becomes a group when preceded by a DO statement and followed by an END statement. Groups may include nested groups and begin blocks. The IF statement specifies a group or a single statement as the THEN part and the ELSE part (see the sample program). The group is the unit of iteration. The begin block (BEGIN; stmt-list END;) may contain declarations for names and internal procedures local to the block. A procedure starts with a PROCEDURE statement and is terminated syntactically by an END statement. The body of a procedure is a sequence of blocks, groups, and statements and contains declarations for names and procedures local to the procedure or EXTERNAL to the procedure.
An ON-unit is a single statement or block of statements written to be executed when one or more of these conditions occur: a computational condition, or an Input/Output condition, or one of the conditions AREA, CONDITION (identifier), ERROR, or FINISH. A declaration of an identifier may contain one or more of the following attributes (but they need to be mutually consistent): Current compilers from Micro Focus, and particularly that from IBM, implement many extensions over the standardized version of the language. The IBM extensions are summarised in the Implementation sub-section for the compiler later. Although there are some extensions common to these compilers, the lack of a current standard means that compatibility is not guaranteed. == Standardization == Language standardization began in April 1966 in Europe with ECMA TC10. In 1969 ANSI established a "Composite Language Development Committee", nicknamed "Kludge", later renamed X3J1 PL/I. Standardization became a joint effort of ECMA TC/10 and ANSI X3J1. A subset of the GY33-6003 document was offered to the joint effort by IBM and became the base document for standardization. The major features omitted from the base document were multitasking and the attributes for program optimization (e.g., NORMAL and ABNORMAL). Proposals to change the base document were voted upon by both committees. In the event that the committees disagreed, the chairs, initially Michael Marcotty of General Motors and C.A.R. Hoare representing ICL, had to resolve the disagreement. In addition to IBM, Honeywell, CDC, Data General, Digital Equipment Corporation, Prime Computer, Burroughs, RCA, and Univac served on X3J1 along with major users Eastman Kodak, MITRE, Union Carbide, Bell Laboratories, and various government and university representatives.
Further development of the language occurred in the standards bodies, with continuing improvements in structured programming and internal consistency, and with the omission of the more obscure or contentious features. As language development neared an end, X3J1/TC10 realized that there were a number of problems with a document written in English text. Discussion of a single item might appear in multiple places which might or might not agree. It was difficult to determine if there were omissions as well as inconsistencies. Consequently, David Beech (IBM), Robert Freiburghouse (Honeywell), Milton Barber (CDC), M. Donald MacLaren (Argonne National Laboratory), Craig Franklin (Data General), Lois Frampton (Digital Equipment Corporation), and editor, D.J. Andrews of IBM undertook to rewrite the entire document, each producing one or more complete chapters. The standard is couched as a formal definition using a "PL/I Machine" to specify the semantics. It was the first programming language standard to be written as a semi-formal definition. A "PL/I General-Purpose Subset" ("Subset-G") standard was issued by ANSI in 1981 and a revision published in 1987. The General Purpose subset was widely adopted as the kernel for PL/I implementations. == Implementations == === IBM PL/I F and D compilers === PL/I was first implemented by IBM, at its Hursley Laboratories in the United Kingdom, as part of the development of System/360. The first production PL/I compiler was the PL/I F compiler for the OS/360 Operating System, built by John Nash's team at Hursley in the UK: the runtime library team was managed by I.M. (Nobby) Clarke. The PL/I F compiler was written entirely in System/360 assembly language. Release 1 shipped in 1966. OS/360 is a real-memory environment and the compiler was designed for systems with as little as 64 kilobytes of real storage – F being 64 kB in S/360 parlance. 
To fit a large compiler into the 44 kilobytes of memory available on a 64-kilobyte machine, the compiler consists of a control phase and a large number of compiler phases (approaching 100). The phases are brought into memory from disk, one at a time, to handle particular language features and aspects of compilation. Each phase makes a single pass over the partially-compiled program, usually held in memory. Aspects of the language were still being designed as PL/I F was implemented, so some were omitted until later releases. PL/I RECORD I/O was shipped with PL/I F Release 2. The list processing functions – Based Variables, Pointers, Areas and Offsets and LOCATE-mode I/O – were first shipped in Release 4. In a major attempt to speed up PL/I code to compete with Fortran object code, PL/I F Release 5 does substantial program optimization of DO-loops facilitated by the REORDER option on procedures. A version of PL/I F was released on the TSS/360 timesharing operating system for the System/360 Model 67, adapted at the IBM Mohansic Lab. The IBM La Gaude Lab in France developed "Language Conversion Programs" to convert Fortran, Cobol, and Algol programs to the PL/I F level of PL/I. The PL/I D compiler, using 16 kilobytes of memory, was developed by IBM Germany for the DOS/360 low end operating system. It implements a subset of the PL/I language requiring all strings and arrays to have fixed extents, thus simplifying the run-time environment. Reflecting the underlying operating system, it lacks dynamic storage allocation and the controlled storage class. It was shipped within a year of PL/I F. === Multics PL/I and derivatives === Compilers were implemented by several groups in the early 1960s. The Multics project at MIT, one of the first to develop an operating system in a high-level language, used Early PL/I (EPL), a subset dialect of PL/I, as their implementation language in 1964. EPL was developed at Bell Labs and MIT by Douglas McIlroy, Robert Morris, and others. 
Initially, it was developed using the TMG compiler-compiler. The influential Multics PL/I compiler [sic "PL/1"] was the source of compiler technology used by a number of manufacturers and software groups. EPL was a system programming language and a dialect of PL/I that had some capabilities absent in the original PL/I. The Honeywell PL/I compiler (for Series 60) is an implementation of the full ANSI X3J1 standard. === IBM PL/I optimizing and checkout compilers === The PL/I Optimizer and Checkout compilers produced in Hursley support a common level of PL/I language and aimed to replace the PL/I F compiler. The checkout compiler is a rewrite of PL/I F in BSL, IBM's PL/I-like proprietary implementation language (later PL/S). The performance objectives set for the compilers are shown in an IBM presentation to the BCS. The compilers had to produce identical results – the Checkout Compiler is used to debug programs that would then be submitted to the Optimizer. Given that the compilers had entirely different designs and were handling the full PL/I language this goal was challenging: it was achieved. IBM introduced new attributes and syntax including BUILTIN, case statements (SELECT/WHEN/OTHERWISE), loop controls (ITERATE and LEAVE) and null argument lists to disambiguate, e.g., DATE(). The PL/I optimizing compiler took over from the PL/I F compiler and was IBM's workhorse compiler from the 1970s to the 1990s. Like PL/I F, it is a multiple pass compiler with a 44 kilobyte design point, but it is an entirely new design. Unlike the F compiler, it has to perform compile time evaluation of constant expressions using the run-time library, reducing the maximum memory for a compiler phase to 28 kilobytes. A second-time around design, it succeeded in eliminating the annoyances of PL/I F such as cascading diagnostics. It was written in S/360 Macro Assembler by a team, led by Tony Burbridge, most of whom had worked on PL/I F. 
Macros were defined to automate common compiler services and to shield the compiler writers from the task of managing real-mode storage, allowing the compiler to be moved easily to other memory models. The gamut of program optimization techniques developed for the contemporary IBM Fortran H compiler was deployed: the Optimizer equaled Fortran execution speeds in the hands of good programmers. Announced with IBM S/370 in 1970, it shipped first for the DOS/360 operating system in August 1971, and shortly afterward for OS/360 and the first virtual-memory IBM operating systems, OS/VS1, MVS, and VM/CMS. (The developers were unaware that while they were shoehorning the code into 28 kb sections, IBM Poughkeepsie was finally ready to ship virtual memory support in OS/360.) It supported the batch programming environments and, under TSO and CMS, it could be run interactively. This compiler went through many versions covering all mainframe operating systems including the operating systems of the Japanese plug-compatible machines (PCMs). The compiler has been superseded by "IBM PL/I for OS/2, AIX, Linux, z/OS" below. The PL/I checkout compiler (colloquially "The Checker"), announced in August 1970, was designed to speed up and improve the debugging of PL/I programs. The team was led by Brian Marks. The three-pass design cut the time to compile a program to 25% of that taken by the F Compiler. It can be run from an interactive terminal, converting PL/I programs into an internal format, "H-text". This format is interpreted by the Checkout compiler at run-time, detecting virtually all types of errors. Pointers are represented in 16 bytes, containing the target address and a description of the referenced item, thus permitting "bad" pointer use to be diagnosed. In a conversational environment when an error is detected, control is passed to the user, who can inspect any variables, introduce debugging statements and edit the source program.
Over time the debugging capability of mainframe programming environments developed most of the functions offered by this compiler, and it was withdrawn (in the 1990s?). === DEC PL/I === Perhaps the most commercially successful implementation aside from IBM's was Digital Equipment Corporation's VAX-11 PL/I, later known as VAX PL/I, then DEC PL/I. The implementation is "a strict superset of the ANSI X3.4-1981 PL/I General Purpose Subset and provides most of the features of the new ANSI X3.74-1987 PL/I General Purpose Subset", and was first released in 1980. It originally used a compiler backend named the VAX Code Generator (VCG) created by a team led by Dave Cutler. The front end was designed by Robert Freiburghouse, and was ported to VAX/VMS from Multics. It runs on VMS on VAX and Alpha, and on Tru64. During the 1990s, Digital sold the compiler to UniPrise Systems, who later sold it to a company named Kednos. Kednos marketed the compiler as Kednos PL/I until October 2016, when the company ceased trading. === Teaching subset compilers === In the late 1960s and early 1970s, many US and Canadian universities were establishing time-sharing services on campus and needed conversational compiler/interpreters for use in teaching science, mathematics, engineering, and computer science. Dartmouth was developing BASIC, but PL/I was a popular choice, as it was concise and easy to teach. As the IBM offerings were unsuitable, a number of schools built their own subsets of PL/I and their own interactive support. Examples are: In the 1960s and early 1970s, Allen-Babcock implemented the Remote Users of Shared Hardware (RUSH) time-sharing system for an IBM System/360 Model 50 with custom microcode, and subsequently implemented IBM's CPS, an interactive time-sharing system for OS/360 aimed at teaching computer science basics, which offered a limited subset of the PL/I language in addition to BASIC and a remote job entry facility.
PL/C, a teaching dialect whose compiler was developed at Cornell University, had the unusual capability of never failing to compile any program, through the use of extensive automatic correction of many syntax errors and by converting any remaining syntax errors to output statements. The language was almost all of PL/I as implemented by IBM. PL/C was a very fast compiler. SL/1 (Student Language/1, Student Language/One or Subset Language/1) was a PL/I subset, initially available in the late 1960s, that ran interpretively on the IBM 1130; instructional use was its strong point. PLAGO, created at the Polytechnic Institute of Brooklyn, used a simplified subset of the PL/I language and focused on good diagnostic error messages and fast compilation times. The Computer Systems Research Group of the University of Toronto produced the SP/k compilers, which supported a sequence of subsets of PL/I called SP/1, SP/2, SP/3, ..., SP/8 for teaching programming. Programs that ran without errors under the SP/k compilers produced the same results under other contemporary PL/I compilers such as IBM's PL/I F compiler, IBM's checkout compiler or Cornell University's PL/C compiler. Other examples are PL0 by P. Grouse at the University of New South Wales, PLUM by Marvin Victor Zelkowitz at the University of Maryland, and PLUTO from the University of Toronto. === IBM PL/I for OS/2, AIX, Linux, z/OS === In a major revamp of PL/I, IBM Santa Teresa in California launched an entirely new compiler in 1992. The initial shipment was for OS/2 and included most ANSI-G features and many new PL/I features. Subsequent releases provided additional platforms (MVS, VM, OS/390, AIX and Windows), but as of 2021, the only supported platforms are z/OS and AIX. IBM continued to add functions to make PL/I fully competitive with other languages (particularly C and C++) in areas where it had been overtaken.
The corresponding "IBM Language Environment" supports inter-operation of PL/I programs with Database and Transaction systems, and with programs written in C, C++, and COBOL; the compiler supports all the data types needed for intercommunication with these languages. The PL/I design principles were retained and withstood this major extension, comprising several new data types, new statements and statement options, new exception conditions, and new organisations of program source. The resulting language is a compatible super-set of the PL/I Standard and of the earlier IBM compilers. Major topics added to PL/I were: New attributes for better support of user-defined data types – the DEFINE ALIAS, ORDINAL, and DEFINE STRUCTURE statements to introduce user-defined types, the HANDLE locator data type, the TYPE data type itself, the UNION data type, and built-in functions for manipulating the new types. Additional data types and attributes corresponding to common PC data types (e.g., UNSIGNED, VARYINGZ). Improvements in readability of programs – often rendering implied usages explicit (e.g., the BYVALUE attribute for parameters). Additional structured programming constructs. Interrupt handling additions. A compile time preprocessor extended to offer almost all PL/I string handling features and to interface with the Application Development Environment. The latest series of PL/I compilers for z/OS, called Enterprise PL/I for z/OS, leverages code generation for the latest z/Architecture processors (z14, z13, zEC12, zBC12, z196, z114) via ARCHLVL parameter control passed during compilation, and was the second high-level language supported by z/OS Language Environment to do so (XL C/C++ being the first, and Enterprise COBOL v5 the last). ==== Data types ==== ORDINAL is a new computational data type.
The ordinal facilities are like those in Pascal, e.g., DEFINE ORDINAL Colour (red, yellow, green, blue, violet); but in addition the name and internal values are accessible via built-in functions. Built-in functions provide access to an ordinal value's predecessor and successor. The DEFINE-statement (see below) allows additional TYPEs to be declared composed from PL/I's built-in attributes. The HANDLE(data structure) locator data type is similar to the POINTER data type, but strongly typed to bind only to a particular data structure. The => operator is used to select a data structure using a handle. The UNION attribute (equivalent to CELL in early PL/I specifications) permits several scalar variables, arrays, or structures to share the same storage in a unit that occupies the amount of storage needed for the largest alternative. ==== Competitiveness on PC and with C ==== These attributes were added: The string attributes VARYINGZ (for zero-terminated character strings), HEXADEC, WIDECHAR, and GRAPHIC. The optional arithmetic attributes UNSIGNED and SIGNED, BIGENDIAN and LITTLEENDIAN. UNSIGNED necessitated the UPTHRU and DOWNTHRU option on iterative groups, enabling a counter-controlled loop to be executed without exceeding the limit value (also essential for ORDINALs and good for documenting loops). The DATE(pattern) attribute for controlling date representations, and additions to bring time and date handling to best current practice. New functions for manipulating dates include DAYS and DAYSTODATE for converting between dates and number of days, and a general DATETIME function for changing date formats. New string-handling functions were added – to centre text, to edit using a picture format, and to trim blanks or selected characters from the head or tail of text – along with VERIFYR (VERIFY from the right) and the SEARCH and TALLY functions. Compound assignment operators à la C (e.g., +=, &=, -=, ||=) were added. A+=1 is equivalent to A=A+1.
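A short sketch can illustrate two of the additions described above, the UNION attribute and the C-style compound assignment operators (the structure name Word and its members are illustrative, not taken from any product documentation):

```pli
 /* Sketch only: Word, Full, Halves are illustrative names.          */
 DCL 1 Word UNION,               /* members overlay the same storage */
       2 Full FIXED BINARY(31),  /* the whole word as one integer    */
       2 Halves,
         3 High FIXED BINARY(15),
         3 Low  FIXED BINARY(15);
 Word.Full = 0;
 Word.Low += 1;             /* compound assignment: Low = Low + 1    */
```

The unit occupies only the storage needed for the largest alternative – here, the four bytes of Full.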
Additional parameter descriptors and attributes were added for omitted arguments and variable-length argument lists. ==== Program readability – making intentions explicit ==== The VALUE attribute declares an identifier as a constant (derived from a specific literal value or restricted expression). Parameters can have the BYADDR (pass by address) or BYVALUE (pass by value) attributes. The ASSIGNABLE and NONASSIGNABLE attributes prevent unintended assignments. DO FOREVER; obviates the need for the contrived construct DO WHILE ( '1'B );. The DEFINE-statement introduces user-specified names (e.g., INTEGER) for combinations of built-in attributes (e.g., FIXED BINARY(31,0)). Thus DEFINE ALIAS INTEGER FIXED BINARY(31,0) creates the TYPE name INTEGER as an alias for the set of built-in attributes FIXED BINARY(31,0). DEFINE STRUCTURE applies to structures and their members; it provides a TYPE name for a set of structure attributes and corresponding substructure member declarations for use in a structure declaration (a generalisation of the LIKE attribute). ==== Structured programming additions ==== A LEAVE statement to exit a loop, and an ITERATE to continue with the next iteration of a loop. UPTHRU and DOWNTHRU options on iterative groups. The package construct consisting of a set of procedures and declarations for use as a unit. Variables declared outside of the procedures are local to the package, and can use STATIC, BASED or CONTROLLED storage. Procedure names used in the package also are local, but can be made external by means of the EXPORTS option of the PACKAGE-statement. ==== Interrupt handling ==== The RESIGNAL-statement executed in an ON-unit terminates execution of the ON-unit, and raises the condition again in the procedure that called the current one (thus passing control to the corresponding ON-unit for that procedure).
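A hedged sketch of RESIGNAL may help; the procedure names Outer and Inner are invented for illustration:

```pli
 /* Illustrative sketch, not a complete program.                      */
 Outer: PROCEDURE OPTIONS (MAIN);
   ON ERROR PUT SKIP LIST ('Outer''s ON-unit now gets the condition');
   CALL Inner;
 Inner: PROCEDURE;
   ON ERROR
     BEGIN;
       PUT SKIP LIST ('Inner saw ERROR first');
       RESIGNAL;      /* end this ON-unit; raise ERROR again in Outer */
     END;
   SIGNAL ERROR;      /* provoke the condition for the demonstration  */
 END Inner;
 END Outer;
```

Inner handles the condition first, and RESIGNAL passes it up to the ON-unit established in the calling procedure.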
The INVALIDOP condition handles invalid operation codes detected by the PC processor, as well as illegal arithmetic operations such as subtraction of two infinite values. The ANYCONDITION condition is provided to intercept conditions for which no specific ON-unit has been provided in the current procedure. The STORAGE condition is raised when an ALLOCATE statement is unable to obtain sufficient storage. === Other mainframe and minicomputer compilers === A number of vendors produced compilers to compete with the IBM PL/I F or Optimizing compilers on mainframes and minicomputers in the 1970s. In the 1980s the target was usually the emerging ANSI-G subset. In 1974 Burroughs Corporation announced PL/I for the B6700 and B7700. UNIVAC released a UNIVAC PL/I, and in the 1970s also used a variant of PL/I, PL/I PLUS, for system programming. From 1978 Data General provided PL/I on its Eclipse and Eclipse MV platforms running the AOS, AOS/VS & AOS/VS II operating systems. A number of operating system utility programs were written in the language. Paul Abrahams of NYU's Courant Institute of Mathematical Sciences wrote CIMS PL/I in 1972 in PL/I, bootstrapping via PL/I F. It supported "about 70%" of PL/I, compiling to the CDC 6600. CDC delivered an optimizing subset PL/I compiler for the Cyber 70, 170 and 6000 series. Fujitsu delivered a PL/I compiler equivalent to the PL/I Optimizer. Stratus Technologies PL/I is an ANSI G implementation for the VOS operating system. IBM Series/1 PL/I is an extended subset of ANSI Programming Language PL/I (ANSI X3.53-1976) for the IBM Series/1 Realtime Programming System. === PL/I compilers for Microsoft .NET === In 2011, Raincode designed a full legacy compiler for the Microsoft .NET and .NET Core platforms, named The Raincode PL/I compiler. === PL/I compilers for personal computers and Unix === In the 1970s and 1980s Digital Research sold a PL/I compiler for CP/M (PL/I-80), CP/M-86 (PL/I-86) and Personal Computers with DOS.
It was based on Subset G of PL/I and was written in PL/M. Micro Focus implemented Open PL/I, which it acquired from Liant, for Windows and UNIX/Linux systems. IBM delivered PL/I for OS/2 in 1994, and PL/I for AIX in 1995. Iron Spring PL/I for OS/2 and later Linux was introduced in 2007. A GCC front end (pl1gcc) was attempted; the project's last release was in September 2007. == PL/I dialects == PL/S, a dialect of PL/I initially called BSL, was developed in the late 1960s and became the system programming language for IBM mainframes. Almost all IBM mainframe system software in the 1970s and 1980s was written in PL/S. It differed from PL/I in that there were no data type conversions, no run-time environment, structures were mapped differently, and assignment was a byte-by-byte copy. All strings and arrays had fixed extents, or used the REFER option. PL/S was succeeded by PL/AS, and then by PL/X, which is the language currently used for internal work on current operating systems, OS/390 and now z/OS. It is also used for some z/VSE and z/VM components. IBM Db2 for z/OS is also written in PL/X. PL/C is an instructional dialect of the PL/I computer programming language, developed at Cornell University in the 1970s. Two dialects of PL/I named PL/MP (Machine Product) and PL/MI (Machine Interface) were used by IBM in the system software of the System/38 and AS/400 platforms. PL/MP was used to implement the so-called Vertical Microcode of these platforms, and targeted the IMPI instruction set. PL/MI targets the Machine Interface of those platforms, and is used in the System/38 Control Program Facility, and the XPF layer of OS/400. The PL/MP code was mostly replaced with C++ when OS/400 was ported to the IBM RS64 processor family, although some was retained and retargeted for the PowerPC/Power ISA architecture. The PL/MI code was not replaced, and remains in use in IBM i.
PL.8, so-called because it was about 80% of PL/I, was originally developed by IBM Research in the 1970s for the IBM 801 architecture. It later gained support for the Motorola 68000 and System/370 architectures. It continues to be used for several IBM internal systems development tasks (e.g., millicode and firmware for z/Architecture systems) and has been re-engineered to use a 64-bit gcc-based backend. Honeywell, Inc. developed PL-6 for use in creating the CP-6 operating system. Prime Computer used two different PL/I dialects as system programming languages for the PRIMOS operating system: PL/P, starting from version 18, and then SP/L, starting from version 19. XPL is a dialect of PL/I used to write other compilers using the XPL compiler techniques. XPL added a heap string datatype to its small subset of PL/I. HAL/S is a real-time aerospace programming language, best known for its use in the Space Shuttle program. It was designed by Intermetrics in the 1970s for NASA. HAL/S was implemented in XPL. IBM and various subcontractors also developed another PL/I variant in the early 1970s to support signal processing for the Navy called SPL/I. SabreTalk, a real-time dialect of PL/I used to program the Sabre airline reservation system. Apple, a PL/I dialect developed by General Motors Research Laboratories for their Control Data Corporation STAR-100 supercomputer, used extensively for graphic design. == Usage == PL/I implementations were developed for mainframes from the late 1960s, mini computers in the 1970s, and personal computers in the 1980s and 1990s. Although its main use has been on mainframes, there are PL/I versions for DOS, Microsoft Windows, OS/2, AIX, OpenVMS, and Unix. It has been widely used in business data processing and for system use for writing operating systems on certain platforms. Very complex and powerful systems have been built with PL/I: The SAS System was initially written in PL/I; the SAS data step is still modeled on PL/I syntax. 
The pioneering online airline reservation system Sabre was originally written for the IBM 7090 in assembler. The S/360 version was largely written using SabreTalk, a purpose-built subset PL/I compiler for a dedicated control program. The Multics operating system was largely written in PL/I. PL/I was used to write an executable formal definition to interpret IBM's System Network Architecture. Some components of the OpenVMS operating system were originally written in PL/I, but were later rewritten in C during the port of VMS to the IA-64 architecture. PL/I did not fulfill its supporters' hopes that it would displace Fortran and COBOL and become the major player on mainframes. It remained a minority but significant player. There cannot be a definitive explanation for this, but some trends in the 1970s and 1980s militated against its success by progressively reducing the territory on which PL/I enjoyed a competitive advantage. First, the nature of the mainframe software environment changed. Application subsystems for database and transaction processing (CICS, IMS, and Oracle on System/370) and application generators became the focus of mainframe users' application development. Significant parts of the language became irrelevant because of the need to use the corresponding native features of the subsystems (such as tasking and much of input/output). Fortran was not used in these application areas, confining PL/I to COBOL's territory; most users stayed with COBOL. But as the PC became the dominant environment for program development, Fortran, COBOL and PL/I all became minority languages overtaken by C++, Java and the like. Second, PL/I was overtaken in the system programming field. The IBM system programming community was not ready to use PL/I; instead, IBM developed and adopted a proprietary dialect of PL/I for system programming – PL/S. With the success of PL/S inside IBM, and of C outside IBM, the unique PL/I strengths for system programming became less valuable.
Third, the development environments grew capabilities for interactive software development that, again, made the unique PL/I interactive and debugging strengths less valuable. Fourth, features such as structured programming, character string operations, and object orientation were added to COBOL and Fortran, which further reduced PL/I's relative advantages. On mainframes there were substantial business issues at stake too. IBM's hardware competitors had little to gain and much to lose from the success of PL/I. Compiler development was expensive, and the IBM compiler groups had an in-built competitive advantage. Many IBM users wished to avoid being locked into proprietary solutions. With no early support for PL/I by other vendors, it was best to avoid PL/I. == Evolution of the PL/I language == This article uses the PL/I standard as the reference point for language features. But a number of features of significance in the early implementations were not in the Standard; and some were offered by non-IBM compilers. And the de facto language continued to grow after the standard, ultimately driven by developments on the Personal Computer. === Significant features omitted from the standard === ==== Multithreading ==== Multithreading, under the name "multitasking", was implemented by PL/I F, the PL/I Checkout and Optimizing compilers, and the newer AIX and z/OS compilers. It comprised the data types EVENT and TASK, the TASK-option on the CALL-statement (Fork), the WAIT-statement (Join), the DELAY(delay-time) statement, EVENT-options on the record I/O statements, and the UNLOCK statement to unlock locked records on EXCLUSIVE files. Event data identify a particular event and indicate whether it is complete ('1'B) or incomplete ('0'B); task data items identify a particular task (or process) and indicate its priority relative to other tasks.
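The fork/join style described here might be sketched as follows; Worker and Done are illustrative names, and the fragment is a sketch rather than a complete multitasking program:

```pli
 /* Hedged sketch of PL/I multitasking (fork/join).                  */
 DCL Done EVENT;
 CALL Worker (42) TASK EVENT(Done); /* fork: run Worker as a task    */
 /* ... the attaching task continues in parallel ...                 */
 WAIT (Done);                       /* join: block until Worker ends */
```

The EVENT variable Done is set complete ('1'B) when the attached task terminates, releasing the WAIT.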
==== Preprocessor ==== The first IBM Compile time preprocessor was built by the IBM Boston Advanced Programming Center located in Cambridge, Massachusetts, and shipped with the PL/I F compiler. The %INCLUDE statement was in the Standard, but the rest of the features were not. The DEC and Kednos PL/I compilers implemented much the same set of features as IBM, with some additions of their own. IBM has continued to add preprocessor features to its compilers. The preprocessor treats the written source program as a sequence of tokens, copying them to an output source file or acting on them. When a % token is encountered, the following compile time statement is executed; when an identifier token is encountered and the identifier has been DECLAREd, ACTIVATEd, and assigned a compile time value, the identifier is replaced by this value. Tokens are added to the output stream if they do not require action (e.g., +), as are the values of ACTIVATEd compile time expressions. Thus a compile time variable PI could be declared, activated, and assigned using %PI='3.14159265'. Subsequent occurrences of PI would be replaced by 3.14159265. The data types supported are FIXED DECIMAL integers and CHARACTER strings of varying length with no maximum length. The structure statements are:
%[label_list:]DO iteration: statements; %[label_list:]END;
%procedure_name: PROCEDURE (parameter list) RETURNS (type); statements...; %[label_list:]END;
%[label_list:]IF...%THEN...%ELSE...
and the simple statements, which also may have a [label_list:]:
%ACTIVATE(identifier_list) and %DEACTIVATE
the assignment statement
%DECLARE identifier_attribute_list
%GO TO label
%INCLUDE
the null statement
The feature allowed programmers to use identifiers for constants – e.g., product part numbers or mathematical constants – and was superseded in the standard by named constants for computational data. Conditional compiling and iterative generation of source code, possible with compile-time facilities, was not supported by the standard.
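The PI example in the text might be written, as a sketch, like this (the surrounding identifiers A and R are illustrative):

```pli
 %DECLARE PI CHARACTER;   /* compile-time variable                   */
 %PI = '3.14159265';      /* assign its replacement text             */
 %ACTIVATE PI;            /* subsequent PI tokens are now replaced   */
 A = 2 * PI * R;          /* emitted as: A = 2 * 3.14159265 * R;     */
 %DEACTIVATE PI;          /* later PI tokens pass through unchanged  */
```

Only the fourth line reaches the compiler proper; the % statements are consumed by the preprocessor pass.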
Several manufacturers implemented these facilities. ==== Structured programming additions ==== Structured programming additions were made to PL/I during standardization but were not accepted into the standard. These features were the LEAVE-statement to exit from an iterative DO, the UNTIL-option and REPEAT-option added to DO, and a case statement of the general form: SELECT (expression) {WHEN (expression) group}... OTHERWISE group. These features were all included in IBM's PL/I Checkout and Optimizing compilers and in DEC PL/I. ==== Debug facilities ==== PL/I F had offered some debug facilities that were not put forward for the standard but were implemented by others – notably the CHECK(variable-list) condition prefix, CHECK on-condition and the SNAP option. The IBM Optimizing and Checkout compilers added additional features appropriate to the conversational mainframe programming environment (e.g., an ATTENTION condition). === Significant features developed since the standard === Several attempts had been made to design a structure member type that could have one of several datatypes (CELL in early IBM). With the growth of classes in programming theory, approaches to this became possible on a PL/I base – UNION, TYPE, etc. have been added by several compilers. PL/I had been conceived in a single-byte character world. With support for the Japanese and Chinese languages becoming essential, and the developments on International Code Pages, the character string concept was expanded to accommodate wide non-ASCII/EBCDIC strings. Time and date handling was overhauled to deal with the millennium problem, with the introduction of the DATETIME function that returned the date and time in one of about 35 different formats. Several other date functions deal with conversions to and from days and seconds. == Criticisms == === Implementation issues === Though the language is easy to learn and use, implementing a PL/I compiler is difficult and time-consuming.
A language as large as PL/I needed subsets that most vendors could produce and most users master. This was not resolved until "ANSI G" was published. The compile time facilities, unique to PL/I, took added implementation effort and additional compiler passes. A PL/I compiler was two to four times as large as comparable Fortran or COBOL compilers, and also that much slower—supposedly offset by gains in programmer productivity. This was anticipated in IBM before the first compilers were written. Some argue that PL/I is unusually hard to parse. The PL/I keywords are not reserved so programmers can use them as variable or procedure names in programs. Because the original PL/I(F) compiler attempts auto-correction when it encounters a keyword used in an incorrect context, it often assumes it is a variable name. This leads to "cascading diagnostics", a problem solved by later compilers. The effort needed to produce good object code was perhaps underestimated during the initial design of the language. Program optimization (needed to compete with the excellent program optimization carried out by available Fortran compilers) is unusually complex owing to side effects and pervasive problems with aliasing of variables. Unpredictable modification can occur asynchronously in exception handlers, which may be provided by "ON statements" in (unseen) callers. Together, these make it difficult to reliably predict when a program's variables might be modified at runtime. In typical use, however, user-written error handlers (the ON-unit) often do not make assignments to variables. In spite of the aforementioned difficulties, IBM produced the PL/I Optimizing Compiler in 1971. PL/I contains many rarely used features, such as multitasking support (an IBM extension to the language) which add cost and complexity to the compiler, and its co-processing facilities require a multi-programming environment with support for non-blocking multiple threads for processes by the operating system. 
Compiler writers were free to select whether to implement these features. An undeclared variable is, by default, declared by first occurrence—thus misspelling might lead to unpredictable results. This "implicit declaration" is no different from FORTRAN programs. For PL/I(F), however, an attribute listing enables the programmer to detect any misspelled or undeclared variable. === Programmer issues === Many programmers were slow to move from COBOL or Fortran due to a perceived complexity of the language and immaturity of the PL/I F compiler. Programmers were sharply divided into scientific programmers (who used Fortran) and business programmers (who used COBOL), with significant tension and even dislike between the groups. PL/I syntax borrowed from both COBOL and Fortran syntax. So instead of noticing features that would make their job easier, Fortran programmers of the time noticed COBOL syntax and had the opinion that it was a business language, while COBOL programmers noticed Fortran syntax and looked upon it as a scientific language. Both COBOL and Fortran programmers viewed it as a "bigger" version of their own language, and both were somewhat intimidated by the language and disinclined to adopt it. Another factor was pseudo-similarities to COBOL, Fortran, and ALGOL. These were PL/I elements that looked similar to one of those languages, but worked differently in PL/I. Such frustrations left many experienced programmers with a jaundiced view of PL/I, and often an active dislike for the language. An early UNIX fortune file contained the following tongue-in-cheek description of the language: Speaking as someone who has delved into the intricacies of PL/I, I am sure that only Real Men could have written such a machine-hogging, cycle-grabbing, all-encompassing monster. Allocate an array and free the middle third? Sure! Why not? Multiply a character string times a bit string and assign the result to a float decimal? Go ahead! 
Free a controlled variable procedure parameter and reallocate it before passing it back? Overlay three different types of variable on the same memory location? Anything you say! Write a recursive macro? Well, no, but Real Men use rescan. How could a language so obviously designed and written by Real Men not be intended for Real Man use? On the positive side, full support for pointers to all data types (including pointers to structures), recursion, multitasking, string handling, and extensive built-in functions meant PL/I was indeed quite a leap forward compared to the programming languages of its time. However, these were not enough to persuade a majority of programmers or shops to switch to PL/I. The PL/I F compiler's compile time preprocessor was unusual (outside the Lisp world) in using its target language's syntax and semantics (e.g., as compared to the C preprocessor's "#" directives). == Special topics in PL/I == === Storage classes === PL/I provides several 'storage classes' to indicate how the lifetime of variables' storage is to be managed – STATIC, AUTOMATIC, CONTROLLED, BASED, and AREA. STATIC data is allocated and initialized at load-time, as is done in COBOL "working-storage" and early Fortran. This is the default for EXTERNAL variables (similar to C "extern" or Fortran "named common"). AUTOMATIC is PL/I's default storage class for INTERNAL variables, similar to that of other block-structured languages influenced by ALGOL, like the "auto" storage class in the C language, the default storage allocation in Pascal, and "local-storage" in IBM COBOL. Storage for AUTOMATIC variables is allocated upon entry into the procedure, BEGIN-block, or ON-unit in which they are declared. The compiler and runtime system allocate memory for a stack frame to contain them and other housekeeping information. If a variable is declared with an INITIAL-attribute, code to set it to an initial value is executed at this time.
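As a sketch of the STATIC and AUTOMATIC behaviour just described (Counter, Calls, and Work are illustrative names):

```pli
 Counter: PROCEDURE;
   DCL Calls FIXED BINARY(31) STATIC INITIAL(0); /* one copy, initialised
                                                    once at load time     */
   DCL Work  FLOAT DECIMAL(6) INITIAL(0);        /* AUTOMATIC by default:
                                                    fresh stack storage,
                                                    re-initialised on
                                                    every entry           */
   Calls = Calls + 1;   /* survives between calls; Work does not */
 END Counter;
```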
Care is required to manage the use of initialization properly. Large amounts of code can be executed to initialize variables every time a scope is entered, especially if the variable is an array or structure. Storage for AUTOMATIC variables is freed at block exit. STATIC, CONTROLLED, or BASED variables are used to retain variables' contents between invocations of a procedure or block. CONTROLLED storage is managed using a stack, but the pushing and popping of allocations on the stack is managed by the programmer, using ALLOCATE and FREE statements. Storage for BASED variables is also managed using ALLOCATE/FREE, but instead of a stack these allocations have independent lifetimes and are addressed through OFFSET or POINTER variables. BASED variables can also be used to address arbitrary storage areas by setting the associated POINTER variable, for example following a linked list. The AREA attribute is used to declare programmer-defined heaps. Data can be allocated and freed within a specific area, and the area can be deleted, read, and written as a unit. === Storage type sharing === There are several ways of accessing allocated storage through different data declarations. Some of these are well defined and safe, some can be used safely with careful programming, and some are inherently unsafe or machine dependent. Passing a variable as an argument to a parameter by reference allows the argument's allocated storage to be referenced using the parameter. The DEFINED attribute (e.g., DCL A(10,10), B(2:9,2:9) DEFINED A) allows part or all of a variable's storage to be used with a different, but consistent, declaration. The language definition includes a CELL attribute (later renamed UNION) to allow different definitions of data to share the same storage. This was not supported by many early IBM compilers. These usages are safe and machine independent.
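A minimal sketch of CONTROLLED stacking and the DEFINED attribute, with variable names chosen for illustration:

```pli
 DCL X FLOAT CONTROLLED;    /* generations stacked by the programmer */
 ALLOCATE X;  X = 1;
 ALLOCATE X;  X = 2;        /* X now denotes the newest generation   */
 FREE X;                    /* pop: the earlier generation (X = 1)
                               becomes current again                 */
 FREE X;

 DCL A(10,10) FIXED BINARY,
     B(2:9,2:9) DEFINED A;  /* B re-describes part of A's storage    */
```

Each ALLOCATE pushes a new generation of X; each FREE pops one, so the name always refers to the most recent allocation.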
Record I/O and list processing produce situations where the programmer needs to fit a declaration to the storage of the next record or item, before knowing what type of data structure it has. Based variables and pointers are key to such programs. The data structures must be designed appropriately, typically using fields in a data structure to encode information about its type and size. The fields can be held in the preceding structure or, with some constraints, in the current one. Where the encoding is in the preceding structure, the program needs to allocate a based variable with a declaration that matches the current item (using expressions for extents where needed). Where the type and size information are to be kept in the current structure ("self-defining structures") the type-defining fields must be ahead of the type-dependent items and in the same place in every version of the data structure. The REFER-option is used for self-defining extents (e.g., string lengths, as in DCL 1 A BASED, 2 N BINARY, 2 B CHAR(LENGTH REFER(A.N)), etc.), where LENGTH is used to allocate instances of the data structure. For self-defining structures, any typing and REFERed fields are placed ahead of the "real" data. If the records in a data set, or the items in a list of data structures, are organised this way they can be handled safely in a machine independent way. PL/I implementations do not (except for the PL/I Checkout compiler) keep track of the data structure used when storage is first allocated. Any BASED declaration can be used with a pointer into the storage to access the storage – inherently unsafe and machine dependent. However, this usage has become important for "pointer arithmetic" (typically adding a certain amount to a known address). This has been a contentious subject in computer science. In addition to the problem of wild references and buffer overruns, issues arise due to the alignment and length for data types used with particular machines and compilers.
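A hedged sketch of a self-defining structure using the REFER-option, along the lines of the declaration above (exact REFER syntax varies slightly between compilers):

```pli
DECLARE Q POINTER,
        LEN FIXED BINARY(15);
DECLARE 1 REC BASED(Q),
          2 N   FIXED BINARY(15),          /* size field kept ahead of the data       */
          2 TXT CHARACTER(LEN REFER(N));   /* extent taken from LEN at ALLOCATE time,
                                              thereafter held in REC.N itself         */
LEN = 11;
ALLOCATE REC;                              /* REC.N is set to 11 automatically        */
REC.TXT = 'HELLO WORLD';
/* when a stored record is later read back, REC.N gives TXT's true length */
```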
Many cases where pointer arithmetic might be needed involve finding a pointer to an element inside a larger data structure. The ADDR function computes such pointers, safely and machine independently. Pointer arithmetic may be accomplished by aliasing a binary variable with a pointer as in DCL P POINTER, N FIXED BINARY(31) BASED(ADDR(P)); N=N+255; It relies on pointers being the same length as FIXED BINARY(31) integers and aligned on the same boundaries. With the prevalence of C and its free and easy attitude to pointer arithmetic, recent IBM PL/I compilers allow pointers to be used with the addition and subtraction operators, giving the simplest syntax (but compiler options can disallow these practices where safety and machine independence are paramount). === ON-units and exception handling === When PL/I was designed, programs only ran in batch mode, with no possible intervention from the programmer at a terminal. An exceptional condition such as division by zero would abort the program, yielding only a hexadecimal core dump. PL/I exception handling, via ON-units, allowed the program to stay in control in the face of hardware or operating system exceptions and to recover debugging information before closing down more gracefully. As a program became properly debugged, most of the exception handling could be removed or disabled: this level of control became less important when conversational execution became commonplace. Computational exception handling is enabled and disabled by condition prefixes on statements, blocks (including ON-units) and procedures – e.g., (SIZE, NOSUBSCRIPTRANGE): A(I)=B(I)*C;. Operating system exceptions for Input/Output and storage management are always enabled. The ON-unit is a single statement or BEGIN-block introduced by an ON-statement. Executing the ON statement enables the condition specified, e.g., ON ZERODIVIDE ON-unit.
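The ON-unit mechanism described in this section can be sketched as follows; the variable names are hypothetical, and the GO TO is used to leave the ON-unit rather than return to the point of interrupt:

```pli
ON ZERODIVIDE BEGIN;                      /* established dynamically              */
   PUT SKIP LIST('DIVIDE BY ZERO - USING 0');
   R = 0;
   GO TO NEXT_STEP;                       /* leave the ON-unit; do not resume at
                                             the point of interrupt               */
END;
R = A / B;                                /* if B = 0, the ON-unit gains control  */
NEXT_STEP: PUT SKIP LIST(R);
```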
When the exception for this condition occurs and the condition is enabled, the ON-unit for the condition is executed. ON-units are inherited down the call chain. When a block, procedure or ON-unit is activated, the ON-units established by the invoking activation are inherited by the new activation. They may be over-ridden by another ON-statement and can be reestablished by the REVERT-statement. The exception can be simulated using the SIGNAL-statement – e.g., to help debug the exception handlers. The dynamic inheritance principle for ON-units allows a routine to handle the exceptions occurring within the subroutines it uses. If no ON-unit is in effect when a condition is raised a standard system action is taken (often this is to raise the ERROR condition). The system action can be reestablished using the SYSTEM option of the ON-statement. With some conditions it is possible to complete executing an ON-unit and return to the point of interrupt (e.g., the STRINGRANGE, UNDERFLOW, CONVERSION, OVERFLOW, AREA, and FILE conditions) and resume normal execution. With other conditions such as (SUBSCRIPTRANGE), the ERROR condition is raised when this is attempted. An ON-unit may be terminated with a GO TO preventing a return to the point of interrupt, but permitting the program to continue execution elsewhere as determined by the programmer. An ON-unit needs to be designed to deal with exceptions that occur in the ON-unit itself. The ON ERROR SYSTEM; statement allows a nested error trap; if an error occurs within an ON-unit, control might pass to the operating system where a system dump might be produced, or, for some computational conditions, continue execution (as mentioned above). The PL/I RECORD I/O statements have relatively simple syntax as they do not offer options for the many situations from end-of-file to record transmission errors that can occur when a record is read or written. 
Instead, these complexities are handled in the ON-units for the various file conditions. The same approach was adopted for AREA sub-allocation and the AREA condition. The existence of exception handling ON-units can have an effect on optimization, because variables can be inspected or altered in ON-units. Values of variables that might otherwise be kept in registers between statements may need to be returned to storage between statements. This is discussed in the section on Implementation Issues above.: pp.249–376  === GO TO with a non-fixed target === PL/I has counterparts for COBOL's and FORTRAN's specialized GO TO statements. Syntax exists in both COBOL and FORTRAN for coding two special types of GO TO, each of which has a target that is not always the same. ALTER (COBOL), ASSIGN (FORTRAN): ALTER paragraph_name_xxx TO PROCEED TO para_name_zzz (“altered go to”). There are other, helpful restrictions on these, especially "in programs ... RECURSIVE attribute, in methods, or .. THREAD option." ASSIGN 1860 TO IGOTTAGO (“assigned go to”), followed by GO TO IGOTTAGO. One enhancement, which adds built-in documentation, is GO TO IGOTTAGO (1860, 1914, 1939) (which restricts the variable's value to "one of the labels in the list."). GO TO ... based on a variable's subscript-like value: GO TO (1914, 1939, 2140), MYCHOICE (“computed go to”); GO TO para_One para_Two para_Three DEPENDING ON IDECIDE (“go to depending on”). PL/I has statement label variables (with the LABEL attribute), which can store the value of a statement label and later be used in a GOTO statement.: 54 : 23 
LABL1: ...
...
LABL2: ...
...
MY_DEST = LABL1;
...
GO TO MY_DEST;
The programmer can also create an array of static label constants by subscripting the statement labels. GO TO HERE(LUCKY_NUMBER); /* minus 1, zero, or ...
*/
HERE(-1): PUT LIST ("I O U");       GO TO Lottery;
HERE(0):  PUT LIST ("No Cash");     GO TO Lottery;
HERE(1):  PUT LIST ("Dollar Bill"); GO TO Lottery;
HERE(2):  PUT LIST ("TWO DOLLARS"); GO TO Lottery;
Statement label variables can be passed to called procedures, and used to return to a different statement in the calling routine. == Sample programs == === Hello world program === === Search for a string === == See also == List of programming languages Timeline of programming languages == Notes == == References == === Textbooks === Neuhold, E.J.; Lawson, H.W. (1971). The PL/I Machine: An Introduction to Programming. Addison-Wesley. ISBN 978-0-2010-5275-6. Barnes, R.A. (1979). PL/I for Programmers. North-Holland. Hughes, Joan K. (1973). PL/I Programming (1st ed.). Wiley. ISBN 0-471-42032-8. Hughes, Joan K. (1986). PL/I Structured Programming (3rd ed.). Wiley. ISBN 0-471-83746-6. Groner, G.F. (1971). PL/I Programming in Technological Applications. Books on Demand, Ann Arbor, MI. Anderson, M.E. (1973). PL/I for Programmers. Prentice-Hall. Stoutemyer, D.R. (1971). PL/I Programming for Engineering & Science. Prentice-Hall. Ziegler, R.R. & C. (1986). PL/I: Structured Programming and Problem Solving (1st ed.). West. ISBN 978-0-314-93915-9. Sturm, E. (2009). The New PL/I ... for PC, Workstation and Mainframe. Vieweg-Teubner, Wiesbaden, Germany. ISBN 978-3-8348-0726-7. Vowels, R.A. (1997). Introduction to PL/I, Algorithms, and Structured Programming (3rd ed.). R.A. Vowels. ISBN 978-0-9596384-9-3. Abrahams, Paul (1979). The PL/I Programming Language (PDF). Courant Mathematics and Computing Laboratory, New York University.
=== Standards === ANSI X3.53-1976 (R1998) Information Systems - Programming Language - PL/I ANSI ANSI X3.74-1981 (R1998) Information Systems - Programming Language - PL/I General-Purpose Subset ANSI ANSI X3.74-1987 (R1998) Information Systems - Programming Language - PL/I General-Purpose Subset ECMA 50 Programming Language PL/I, 1st edition, December 1976 ISO 6160:1979 Programming languages—PL/I ISO/IEC 6522:1992 Information technology: Programming languages: PL/I general purpose subset === Reference manuals === Burroughs Corporation, "B 6700 / B 7700 PL/I Language Reference", 5001530. Detroit, 1977. CDC. R. A. Vowels, "PL/I for CDC Cyber". Optimizing compiler for the CDC Cyber 70 series. Digital Equipment Corporation, "decsystem10 Conversational Programming Language User's Manual", DEC-10-LCPUA-A-D. Maynard, 1975. Fujitsu Ltd, "Facom OS IV PL/I Reference Manual", 70SP5402E-1, 1974. 579 pages. PL/I F subset. Honeywell, Inc., "Multics PL/I Language Specification", AG94-02. 1981. IBM, IBM Operating System/360 PL/I: Language Specifications, C28-6571. 1965. IBM, OS PL/I Checkout and Optimizing Compilers: Language Reference Manual, GC33-0009. 1976. IBM, "NPL Technical Report", December 1964. IBM, Enterprise PL/I for z/OS Version 4 Release 1 Language Reference Manual Archived 2020-07-28 at the Wayback Machine, SC14-7285-00. 2010. IBM, OS/2 PL/I Version 2: Programming: Language Reference, 3rd Ed., Form SC26-4308, San Jose. 1994. Kednos PL/I for OpenVMS Systems. Reference Manual Archived 2004-03-04 at the Wayback Machine, AA-H952E-TM. Nov 2003. Liant Software Corporation (1994), Open PL/I Language Reference Manual, Rev. Ed., Framingham (Mass.). Nixdorf Computer, "Terminalsystem 8820 Systemtechnischer Teil PL/I-Subset", 05001.17.8.93-01, 1976. Ing. C. Olivetti, "Mini PL/I Reference Manual", 1975, No. 3970530 V Q1 Corporation, "The Q1/LMC Systems Software Manual", Farmingdale, 1978.
== External links == IBM PL/I Compilers for z/OS, AIX, MVS, VM and VSE Iron Spring Software, PL/I for Linux and OS/2 Micro Focus' Mainframe PL/I Migration Solution OS PL/I V2R3 grammar Version 0.1 Pliedit, PL/I editor for Eclipse Power vs. Adventure - PL/I and C, a side-by-side comparison of PL/I and C. Softpanorama PL/1 page [sic "PL/1"] The PL/I Language PL1GCC project in SourceForge PL/I software to print signs, source code in book form, by David Sligar (1977), for IBM PL/I F compiler. PLI-2000 on GitHub, Open-source Windows NT PL/I compiler
https://en.wikipedia.org/wiki/PL/I
Swift is a high-level general-purpose, multi-paradigm, compiled programming language created by Chris Lattner in 2010 for Apple Inc. and maintained by the open-source community. Swift compiles to machine code and uses an LLVM-based compiler. Swift was first released in June 2014 and the Swift toolchain has shipped in Xcode since Xcode version 6, released in September 2014. Apple intended Swift to support many core concepts associated with Objective-C, notably dynamic dispatch, widespread late binding, extensible programming, and similar features, but in a "safer" way, making it easier to catch software bugs; Swift has features addressing some common programming errors like null pointer dereferencing and provides syntactic sugar to help avoid the pyramid of doom. Swift supports the concept of protocol extensibility, an extensibility system that can be applied to types, structs and classes, which Apple promotes as a real change in programming paradigms they term "protocol-oriented programming" (similar to traits and type classes). Swift was introduced at Apple's 2014 Worldwide Developers Conference (WWDC). It underwent an upgrade to version 1.2 during 2014 and a major upgrade to Swift 2 at WWDC 2015. It was initially a proprietary language, but version 2.2 was made open-source software under the Apache License 2.0 on December 3, 2015, for Apple's platforms and Linux. == History == Development of Swift started in July 2010 by Chris Lattner, with the eventual collaboration of many other programmers at Apple. Swift was motivated by the need for a replacement for Apple's earlier programming language Objective-C, which had been largely unchanged since the early 1980s and lacked modern language features. Swift took language ideas "from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list". On June 2, 2014, the Apple Worldwide Developers Conference (WWDC) application became the first publicly released app written with Swift. 
A beta version of the programming language was released to registered Apple developers at the conference, but the company did not promise that the final version of Swift would be source code compatible with the test version. Apple planned to make source code converters available if needed for the full release. The Swift Programming Language, a free 500-page manual, was also released at WWDC, and is available on the Apple Books Store and the official website. Swift reached the 1.0 milestone on September 9, 2014, with the Gold Master of Xcode 6.0 for iOS. Swift 1.1 was released on October 22, 2014, alongside the launch of Xcode 6.1. Swift 1.2 was released on April 8, 2015, along with Xcode 6.3. Swift 2.0 was announced at WWDC 2015, and was made available for publishing apps in the App Store on September 21, 2015. Swift 3.0 was released on September 13, 2016. Through version 3.0, the syntax of Swift went through significant evolution, with the core team making source stability a focus in later versions. Swift 4.0, released on September 19, 2017, introduced several changes to some built-in classes and structures. Code written with previous versions of Swift can be updated using the migration functionality built into Xcode. Swift 4.1 was released on March 29, 2018. In the first quarter of 2018, Swift surpassed Objective-C in measured popularity. Swift 5, released in March 2019, introduced a stable binary interface on Apple platforms, allowing the Swift runtime to be incorporated into Apple operating systems. It is source compatible with Swift 4. Swift 5.1 was officially released in September 2019. Swift 5.1 builds on the previous version of Swift 5 by extending the stable features of the language to compile-time with the introduction of module stability. The introduction of module stability makes it possible to create and share binary frameworks that will work with future releases of Swift. 
Swift 5.5, officially announced by Apple at the 2021 WWDC, significantly expands language support for concurrency and asynchronous code, notably introducing a unique version of the actor model. Swift 5.9 was released in September 2023 and includes a macro system, generic parameter packs, and ownership features like the new consume operator. Swift 5.10, released in March 2024, improves the language's concurrency model, allowing for full data isolation to prevent data races. Swift 6 was released in September 2024. Swift 6.1 was released in March 2025. It includes "new language enhancements to improve productivity, diagnostics improvements, package traits, and ongoing work to improve data-race safety usability and compile times." Swift won first place for Most Loved Programming Language in the Stack Overflow Developer Survey 2015 and second place in 2016. On December 3, 2015, the Swift language, supporting libraries, debugger, and package manager were open-sourced under the Apache 2.0 license with a Runtime Library Exception, and Swift.org was created to host the project. The source code is hosted on GitHub, where it is easy for anyone to get the code, build it themselves, and even create pull requests to contribute code back to the project. In December 2015, IBM announced its Swift Sandbox website, which allows developers to write Swift code in one pane and display output in another. The Swift Sandbox was deprecated in January 2018. During the WWDC 2016, Apple announced an iPad-exclusive app, named Swift Playgrounds, intended to teach people how to code in Swift. The app is presented in a 3D video game-like interface which provides feedback when lines of code are placed in a certain order and executed. In January 2017, Chris Lattner announced his departure from Apple for a new position with Tesla Motors, with the Swift project lead role going to team veteran Ted Kremenek.
During WWDC 2019, Apple announced SwiftUI with Xcode 11, which provides a framework for declarative UI structure design across all Apple platforms. Official downloads of the SDK and toolchain for the Ubuntu distribution of Linux have been available since Swift 2.2, with more distributions, including CentOS and Amazon Linux, added as of Swift 5.2.4. There is also an unofficial SDK and native toolchain package for Android. === Platforms === The platforms Swift supports are Apple's operating systems (Darwin, iOS, iPadOS, macOS, tvOS, watchOS), Linux, Windows, and Android. A key aspect of Swift's design is its ability to interoperate with the huge body of existing Objective-C code developed for Apple products over the previous decades, such as Cocoa and the Cocoa Touch frameworks. On Apple platforms, it links with the Objective-C runtime library, which allows C, Objective-C, C++ and Swift code to run within one program. === Version history === == Features == Swift is a general-purpose programming language that employs modern programming-language theory concepts and strives to present a simple, yet powerful syntax. Swift incorporates innovations and conventions from various programming languages, with notable inspiration from Objective-C, which it replaced as the primary development language on Apple Platforms. Swift was designed to be safe and friendly to new programmers while not sacrificing speed. By default Swift manages all memory automatically and ensures variables are always initialized before use. Array accesses are checked for out-of-bounds errors and integer operations are checked for overflow. Parameter names allow creating clear APIs. Protocols define interfaces that types may adopt, while extensions allow developers to add more functionality to existing types. Swift enables object-oriented programming with the support for classes, subtyping, and method overriding. Optionals allow nil values to be handled explicitly and safely.
Concurrent programs can be written using async/await syntax, and actors isolate shared mutable state in order to eliminate data races. === Basic syntax === Swift's syntax is similar to C-style languages. Code begins executing in the global scope by default. Alternatively, the @main attribute can be applied to a structure, class, or enumeration declaration to indicate that it contains the program's entry point. Swift's "Hello, World!" program is print("Hello, world!"). The print(_:separator:terminator:) function used here is included in Swift's standard library, which is available to all programs without the need to import external modules. Statements in Swift do not have to end with a semicolon; however, semicolons are required to separate multiple statements written on the same line. Single-line comments begin with // and continue until the end of the current line. Multiline comments are contained by /* and */ characters. Constants are declared with the let keyword and variables with the var keyword. Values must be initialized before they are read. A value's type may be inferred from the provided initial value. If the initial value is set after the value's declaration, a type must be declared explicitly. Control flow in Swift is managed with if-else, guard, and switch statements, along with while and for-in loops. An if statement takes a Boolean condition and executes its body if the condition is true; otherwise the optional else body is executed. if-let syntax provides syntactic sugar for checking for the existence of an optional value and unwrapping it at the same time. Functions are defined with the func keyword. Function parameters may have names which allow function calls to read like phrases. An underscore before the parameter name allows the argument label to be omitted from the call site.
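The declarations, if-let unwrapping, and argument labels described above can be sketched as follows (all names are hypothetical):

```swift
let maximumAttempts = 3        // constant; Int is inferred from the literal
var attempts = 0               // variable
var message: String            // no initial value, so the type is required
message = "Hello, world!"
print(message)

// if-let unwraps an optional only when it holds a value:
let parsed: Int? = Int("42")
if let value = parsed {
    print("Parsed \(value)")
}

// An argument label lets the call read like a phrase;
// an underscore omits the label at the call site:
func greet(_ name: String, from hometown: String) -> String {
    return "Hello \(name), from \(hometown)!"
}
print(greet("Ada", from: "London"))
```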
Tuples can be used by functions to return multiple pieces of data at once. Functions, and anonymous functions known as closures, can be assigned to properties and passed around the program like any other value. guard statements require that the given condition is true before continuing past the guard statement; otherwise the body of the provided else clause is run. The else clause must exit control of the code block in which the guard statement appears. guard statements are useful for ensuring that certain requirements are met before continuing on with program execution. In particular they can be used to create an unwrapped version of an optional value that is guaranteed to be non-nil for the remainder of the enclosing scope. switch statements compare a value with multiple potential values and then execute an associated code block. switch statements must be made exhaustive, either by including cases for all possible values or by including a default case which is run when the provided value doesn't match any of the other cases. switch cases do not implicitly fall through, although they may explicitly do so with the fallthrough keyword. Pattern matching can be used in various ways inside switch statements, for example to match an integer against a number of potential ranges. for-in loops iterate over a sequence of values, and while loops iterate as long as the given Boolean condition evaluates to true. === Closure support === Swift supports closures, which are self-contained blocks of functionality that can be passed around and used in code, and can also be used as anonymous functions. Closures can be assigned to variables and constants, and can be passed into other functions or closures as parameters. Single-expression closures may drop the return keyword. Swift also has a trailing closure syntax, which allows the closure to be written after the end of the function call instead of within the function's parameter list.
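The closure forms just described (a full closure expression, a single-expression closure, and a trailing closure) can be sketched as follows, with hypothetical names:

```swift
let names = ["Chris", "Alex", "Barry"]

// Full closure-expression syntax, passed as a labeled parameter:
let ascending = names.sorted(by: { (a: String, b: String) -> Bool in
    return a < b
})

// The same call with a trailing closure written after the parentheses:
let descending = names.sorted() { a, b in
    a > b   // single-expression closures may omit 'return'
}

// A closure assigned to a constant and called like a function:
let double = { (x: Int) -> Int in x * 2 }
print(ascending, descending, double(21))
```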
Parentheses can be omitted altogether if the closure is the function's only parameter. Starting from version 5.3, Swift supports multiple trailing closures. Swift provides shorthand argument names for inline closures, removing the need to explicitly name all of the closure's parameters; arguments can be referred to with the names $0, $1, $2, and so on. Closures may capture values from their surrounding scope; the closure will refer to this captured value for as long as the closure exists. === String support === The Swift standard library includes Unicode-compliant String and Character types. String values can be initialized with a String literal, a sequence of characters surrounded by double quotation marks. Strings can be concatenated with the + operator. String interpolation allows for the creation of a new string from other values and expressions. Values written between parentheses preceded by a \ will be inserted into the enclosing string literal. A for-in loop can be used to iterate over the characters contained in a string. If the Foundation framework is imported, Swift invisibly bridges the String type to NSString (the String class commonly used in Objective-C). === Callable objects === === Access control === Swift supports five access control levels for symbols: open, public, internal, fileprivate, and private. Unlike many object-oriented languages, these access controls ignore inheritance hierarchies: private indicates that a symbol is accessible only in the immediate scope, fileprivate indicates it is accessible only from within the file, internal indicates it is accessible within the containing module, public indicates it is accessible from any module, and open (only for classes and their methods) indicates that the class may be subclassed outside of the module.
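A sketch of the five access levels; the class and its members are hypothetical:

```swift
open class Widget {                   // subclassable from other modules
    public init() {}
    public func render() {}           // callable, but not overridable, outside the module
    internal var cache: [String] = [] // visible only within this module (the default)
    fileprivate var state = 0         // visible only within this source file
    private let id = 7                // visible only within the enclosing scope
}
```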
=== Optionals and chaining === An important feature in Swift is option types, which allow references or values to operate in a manner similar to the common pattern in C, where a pointer may either refer to a specific value or no value at all. This implies that non-optional types cannot result in a null-pointer error; the compiler can ensure this is not possible. Optional types are created with the Optional enum. To make an integer that is nullable, one would use a declaration similar to var optionalInteger: Optional<Int>. As in C#, Swift also includes syntactic sugar for this, allowing one to indicate a variable is optional by placing a question mark after the type name, var optionalInteger: Int?. Variables or constants that are marked optional either have a value of the underlying type or are nil. Optional types wrap the base type, resulting in a different instance. String and String? are fundamentally different types: the former is of type String while the latter is an Optional that may be holding some String value. To access the value inside, assuming it is not nil, it must be unwrapped to expose the instance inside. This is performed with the ! operator: In this case, the ! operator unwraps anOptionalInstance to expose the instance inside, allowing the method call to be made on it. If anOptionalInstance is nil, a null-pointer error occurs, terminating the program. This is known as force unwrapping. Optionals may be safely unwrapped using optional chaining, which first tests whether the instance is nil and then unwraps it if it is non-nil: In this case the runtime calls someMethod only if anOptionalInstance is not nil, suppressing the error. A ? must be placed after every optional property. If any of these properties are nil the entire expression evaluates as nil. The origin of the term chaining comes from the more common case where several method calls/getters are chained together.
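The unwrapping styles described above can be sketched as follows; the ?? at the end is Swift's nil-coalescing operator, which is not discussed above:

```swift
var name: String? = "Tim"        // either a String or nil

print(name!)                     // force unwrap: traps at runtime if nil

name = nil
let length = name?.count         // optional chaining: evaluates to nil instead of trapping
print(length ?? 0)               // nil-coalescing supplies a default; prints 0 here
```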
For instance, a series of nested nil checks can be reduced to a single chained expression. Swift's use of optionals allows the compiler to use static dispatch because the unwrapping action is called on a defined instance (the wrapper), versus occurring in a runtime dispatch system. === Value types === In many object-oriented languages, objects are represented internally in two parts. The object is stored as a block of data placed on the heap, while the name (or "handle") to that object is represented by a pointer. Objects are passed between methods by copying the value of the pointer, allowing the same underlying data on the heap to be accessed by anyone with a copy. In contrast, basic types like integers and floating-point values are represented directly; the handle contains the data, not a pointer to it, and that data is passed directly to methods by copying. These styles of access are termed pass-by-reference in the case of objects, and pass-by-value for basic types. Both concepts have their advantages and disadvantages. Objects are useful when the data is large, like the description of a window or the contents of a document. In these cases, access to that data is provided by copying a 32- or 64-bit value, versus copying an entire data structure. However, smaller values like integers are the same size as pointers (typically both are one word), so there is no advantage to passing a pointer, versus passing the value. Swift offers built-in support for objects using either pass-by-reference or pass-by-value semantics, the former using the class declaration and the latter using struct. Structs in Swift have almost all the same features as classes: methods, implementing protocols and using the extension mechanisms. For this reason, Apple terms all data generically as instances, versus objects or values. Structs do not support inheritance, however. The programmer is free to choose which semantics are more appropriate for each data structure in the application.
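The value-versus-reference distinction can be sketched with a hypothetical struct and class:

```swift
struct Point { var x = 0.0, y = 0.0 }               // value type
final class Document { var points: [Point] = [] }   // reference type

var a = Point(x: 1, y: 2)
var b = a                 // copies the value
b.x = 99
print(a.x)                // 1.0 ('a' is unaffected by the change to 'b')

let d1 = Document()
let d2 = d1               // copies only the reference
d2.points.append(a)
print(d1.points.count)    // 1 (both names refer to the same object)
```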
Larger structures like windows would be defined as classes, allowing them to be passed around as pointers. Smaller structures, like a 2D point, can be defined as structs, which will be pass-by-value and allow direct access to their internal data with no indirection or reference counting. The performance improvement inherent to the pass-by-value concept is such that Swift uses these types for almost all common data types, including Int and Double, and types normally represented by objects, like String and Array. Using value types can result in significant performance improvements in user applications as well. Array, Dictionary, and Set all utilize copy on write so that their data are copied only if and when the program attempts to change a value in them. This means that the various accessors have what is in effect a pointer to the same data storage. So while the data is physically stored as one instance in memory, at the level of the application, these values are separate and physical separation is enforced by copy on write only if needed. === Extensions === Extensions add new functionality to an existing type, without the need to subclass or even have access to the original source code. Extensions can add new methods, initializers, computed properties, subscripts, and protocol conformances. An example might be to add a spell checker to the base String type, which means all instances of String in the program gain the ability to spell-check. The system is also widely used as an organizational technique, allowing related code to be gathered into library-like extensions. Extensions are declared with the extension keyword. === Protocol-oriented programming === Protocols promise that a particular type implements a set of methods or properties, meaning that other instances in the system can call those methods on any instance implementing that protocol. 
This is often used in modern object-oriented languages as a substitute for multiple inheritance, although the feature sets are not entirely similar. In Objective-C, and most other languages implementing the protocol concept, it is up to the programmer to ensure that the required methods are implemented in each class. Swift adds the ability to add these methods using extensions, and to use generic programming (generics) to implement them. Combined, these allow protocols to be written once and support a wide variety of instances. Also, the extension mechanism can be used to add protocol conformance to an object that does not list that protocol in its definition. For example, a protocol might be declared called Printable, which ensures that instances that conform to the protocol implement a description property and a printDetails() method requirement: This protocol can now be adopted by other types: Extensions can be used to add protocol conformance to types. Protocols themselves can also be extended to provide default implementations of their requirements. Adopters may define their own implementations, or they may use the default implementation: In Swift, like many modern languages supporting interfaces, protocols can be used as types, which means variables and methods can be defined by protocol instead of their specific type: It does not matter what concrete type of someSortOfPrintableInstance is, the compiler will ensure that it conforms to the protocol and thus this code is safe. This syntax also means that collections can be based on protocols also, like let printableArray = [any Printable]. Both extensions and protocols are used extensively in Swift's standard library; in Swift 5.9, approximately 1.2 percent of all symbols within the standard library were protocols, and another 12.3 percent were protocol requirements or default implementations. 
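A sketch of the Printable example described above, with a hypothetical adopting type; both the default implementation and the conformance come from extensions:

```swift
protocol Printable {                       // names taken from the example above
    var description: String { get }
    func printDetails()
}

extension Printable {                      // protocol extension: default implementation
    func printDetails() { print("Details: \(description)") }
}

struct Invoice {                           // a hypothetical type
    let total: Double
}

extension Invoice: Printable {             // conformance added via an extension
    var description: String { "Invoice for \(total)" }
}

let items: [any Printable] = [Invoice(total: 9.99)]
for item in items { item.printDetails() }  // uses the default implementation
```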
For instance, Swift uses extensions to add the Equatable protocol to many of its basic types, like Strings and Arrays, allowing them to be compared with the == operator. The Equatable protocol also defines a default implementation of the not-equals operator that works on any instance conforming to Equatable; any instance, class or struct, automatically gains this implementation simply by conforming to Equatable. Protocols, extensions, and generics can be combined to create sophisticated APIs. For example, constraints allow types to conditionally adopt protocols or methods based on the characteristics of the adopting type; a common use case is adding a method to collection types only when the elements contained within the collection are Equatable. === Concurrency === Swift 5.5 introduced structured concurrency into the language. Structured concurrency uses async/await syntax similar to Kotlin, JavaScript, and Rust. An async function is defined with the async keyword after the parameter list. When calling an async function, the await keyword must be written before the call to indicate that execution may suspend at that point. While a function is suspended, the program may run some other concurrent function in the same program. This syntax allows programs to clearly call out potential suspension points and avoid a version of the pyramid of doom caused by the previously widespread use of closure callbacks. The async let syntax allows multiple functions to run in parallel; await is again used to mark the point at which the program will suspend to wait for the completion of the async functions called earlier. Tasks and TaskGroups can be created explicitly to spawn a dynamic number of child tasks at runtime. Swift uses the actor model to isolate mutable state, allowing different tasks to mutate shared state in a safe manner.
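The async/await and async let syntax described above might look like the following minimal sketch. fetchScore and the player names are invented for illustration, and top-level await assumes a Swift 5.7 or later toolchain:

```swift
// A hypothetical async function; the sleep simulates network latency.
func fetchScore(for player: String) async -> Int {
    try? await Task.sleep(nanoseconds: 10_000_000)  // ~10 ms
    return player.count * 10
}

// `async let` starts both calls in parallel; the single `await`
// marks the suspension point where both results are collected.
func totalScore() async -> Int {
    async let a = fetchScore(for: "Ada")
    async let b = fetchScore(for: "Grace")
    return await a + b
}

// Top-level await drives the async code from script code.
let total = await totalScore()
print(total)
```

Both fetchScore calls run concurrently; execution suspends at the await until both results are ready.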
Actors are declared with the actor keyword and are reference types, like classes. Only one task may access the mutable state of an actor at the same time. Actors may access and mutate their own internal state freely, but code running in separate tasks must mark each access with the await keyword to indicate that the code may suspend until other tasks finish accessing the actor's state. === Libraries, runtime, development === On Apple systems, Swift uses the same runtime as the extant Objective-C system, but requires iOS 7 or macOS 10.9 or higher. It also depends on Grand Central Dispatch. Swift and Objective-C code can be used in one program, and by extension, C and C++ also. Beginning in Swift 5.9, C++ code can be used directly from Swift code. In the case of Objective-C, Swift has considerable access to the object model, and can be used to subclass, extend and use Objective-C code to provide protocol support. The converse is not true: a Swift class cannot be subclassed in Objective-C. To aid development of such programs, and the re-use of extant code, Xcode 6 and higher offers a semi-automated system that builds and maintains a bridging header to expose Objective-C code to Swift. This takes the form of an additional header file that simply defines or imports all of the Objective-C symbols that are needed by the project's Swift code. At that point, Swift can refer to the types, functions, and variables declared in those imports as though they were written in Swift. Objective-C code can also use Swift code directly, by importing an automatically maintained header file with Objective-C declarations of the project's Swift symbols. For instance, an Objective-C file in a mixed project called "MyApp" could access Swift classes or functions with the code #import "MyApp-Swift.h". 
Not all symbols are available through this mechanism, however—use of Swift-specific features like generic types, non-object optional types, sophisticated enums, or even Unicode identifiers may render a symbol inaccessible from Objective-C. Swift also has limited support for attributes, metadata that is read by the development environment, and is not necessarily part of the compiled code. Like Objective-C, attributes use the @ syntax, but the currently available set is small. One example is the @IBOutlet attribute, which marks a given value in the code as an outlet, available for use within Interface Builder (IB). An outlet is a device that binds the value of the on-screen display to an object in code. On non-Apple systems, Swift does not depend on an Objective-C runtime or other Apple system libraries; a set of Swift "Corelib" implementations replace them. These include a "swift-corelibs-foundation" to stand in for the Foundation Kit, a "swift-corelibs-libdispatch" to stand in for Grand Central Dispatch, and a "swift-corelibs-xctest" to stand in for the XCTest APIs from Xcode. As of 2019, with Xcode 11, Apple has also added a major new UI paradigm called SwiftUI. SwiftUI replaces the older Interface Builder paradigm with a new declarative development paradigm. === Memory management === Swift uses Automatic Reference Counting (ARC) to manage memory. Every instance of a class or closure maintains a reference count, which keeps a running tally of the number of references the program is holding on to. When this count reaches 0, the instance is deallocated. This automatic deallocation removes the need for a garbage collector, as instances are deallocated as soon as they are no longer needed. A strong reference cycle can occur if two instances each strongly reference each other (e.g. A references B, B references A). Since neither instance's reference count can ever reach zero, neither is ever deallocated, resulting in a memory leak.
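The reference-counting behavior and the strong reference cycle described above can be observed directly. Tracker is a hypothetical class used only to count deallocations via deinit:

```swift
// A minimal sketch of ARC behavior; Tracker exists only so that
// deinit can record when an instance is deallocated.
final class Tracker {
    static var deallocations = 0
    var partner: Tracker?
    deinit { Tracker.deallocations += 1 }
}

// With no cycle, dropping the last reference deallocates immediately.
var a: Tracker? = Tracker()
a = nil
// Tracker.deallocations is now 1

// Two instances strongly referencing each other form a cycle:
var b: Tracker? = Tracker()
var c: Tracker? = Tracker()
b?.partner = c
c?.partner = b
b = nil
c = nil
// Neither deinit runs; both instances leak,
// so Tracker.deallocations is still 1.
```

Declaring partner as weak would break the cycle and allow both instances to be deallocated.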
Swift provides the keywords weak and unowned to prevent strong reference cycles. These keywords allow an instance to be referenced without incrementing its reference count. weak references must be optional variables, since they can change and become nil. Attempting to access an unowned value that has already been deallocated results in a runtime error. A closure within a class can also create a strong reference cycle by capturing self references; a capture list can be used to indicate that captured references to self should be treated as weak or unowned. === Debugging === A key element of the Swift system is its ability to be cleanly debugged and run within the development environment, using a read–eval–print loop (REPL), giving it interactive properties more in common with the scripting abilities of Python than traditional system programming languages. The REPL is further enhanced with playgrounds, interactive views running within the Xcode environment or Playgrounds app that respond to code or debugger changes on-the-fly. Playgrounds allow programmers to add Swift code along with Markdown documentation. Programmers can step through code and add breakpoints using LLDB, either in a console or an IDE like Xcode. == Comparisons to other languages == Swift is considered a C family programming language and is similar to C in various ways: Most operators in C also appear in Swift, although some operators such as + have slightly different behavior. For example, in Swift, + traps on overflow, whereas &+ is used to denote the C-like behavior of wrapping on overflow. Curly braces are used to group statements. Variables are assigned using an equals sign, but compared using two consecutive equals signs. A new identity operator, ===, is provided to check if two data elements refer to the same object.
Control statements while, if, and switch are similar, but have extended functions: e.g., switch can take non-integer cases, while and if support pattern matching and conditional unwrapping of optionals, and for uses the for i in 1...10 syntax. Square brackets are used with arrays, both to declare them and to get a value at a given index in one of them. It also has similarities to Objective-C: Basic numeric types: Int, UInt, Float, Double Class methods are inherited, like instance methods; self in class methods is the class the method was called on. Similar for...in enumeration syntax. Differences from Objective-C include: Statements need not end with semicolons (;), though these must be used to allow more than one statement on one line. No header files. Uses type inference. Generic programming. Functions are first-class objects. Enumeration cases can have associated data (algebraic data types). Operators can be redefined for classes (operator overloading), and new operators can be defined. Strings fully support Unicode. Most Unicode characters can be used in either identifiers or operators. No exception handling. Swift 2 introduced a different and incompatible error-handling model. Several features of earlier C-family languages that are easy to misuse have been removed: Pointers are not exposed by default. There is no need for the programmer to keep track of and mark names for referencing or dereferencing. Assignments return no value. This prevents the common error of writing i = 0 instead of i == 0: in Swift, using an assignment where a condition is expected produces a compile-time error. No need to use break statements in switch blocks. Individual cases do not fall through to the next case unless the fallthrough statement is used. Variables and constants are always initialized and array bounds are always checked. Integer overflows, which result in undefined behavior for signed integers in C, are trapped as a run-time error in Swift.
Programmers can choose to allow overflows by using the special arithmetical operators &+, &-, &*, &/ and &%. The properties min and max are defined in Swift for all integer types and can be used to safely check for potential overflows, versus relying on constants defined for each type in external libraries. The one-statement form of if and while, which allows for the omission of braces around the statement, is unsupported. C-style enumeration for (int i = 0; i < c; i++), which is prone to off-by-one errors, is unsupported (from Swift 3 onward). The pre- and post-increment and decrement operators (i++, --i, ...) are also unsupported from Swift 3 onward, not least because C-style for statements were removed at the same time. == Development and other implementations == Because Swift can run on Linux, it is sometimes also used as a server-side language. Some web frameworks have been developed, such as IBM's Kitura (now discontinued), Perfect, Vapor, and Hummingbird. An official "Server APIs" work group has also been started by Apple, with members of the Swift developer community playing a central role. A second free implementation of Swift that targets Cocoa, Microsoft's Common Language Infrastructure (.NET Framework, now .NET), and the Java and Android platform exists as part of the Elements Compiler from RemObjects Software. Subsets of Swift have been ported to additional platforms, such as Arduino and Mac OS 9. == See also == Comparison of programming languages Objective-C D (programming language) Kotlin (programming language) Nim (programming language) Python (programming language) Realm (database) == References == == External links == Official website Swift at Apple Developer Swift source code on GitHub
https://en.wikipedia.org/wiki/Swift_(programming_language)
Predicative programming is the original name of a formal method for program specification and refinement, more recently called a Practical Theory of Programming, invented by Eric Hehner. The central idea is that each specification is a binary (boolean) expression that is true of acceptable computer behaviors and false of unacceptable behaviors. It follows that refinement is just implication. This is the simplest formal method, and the most general, applying to sequential, parallel, stand-alone, communicating, terminating, nonterminating, natural-time, real-time, deterministic, and probabilistic programs, and includes time and space bounds. Commands in a programming language are considered to be a special case of specification—those specifications that are compilable. For example, if the program variables are x, y, and z, the command x := y+1 is equivalent to the specification (binary expression) x′ = y+1 ∧ y′ = y ∧ z′ = z, in which x, y, and z represent the values of the program variables before the assignment, and x′, y′, and z′ represent the values of the program variables after the assignment. If the specification is x′ > y, we easily prove (x := y+1) ⇒ (x′ > y), which says that x := y+1 implies, or refines, or implements x′ > y. Loop proofs are greatly simplified.
For example, if x is an integer variable, to prove that while x>0 do x := x−1 od refines, or implements, the specification x≥0 ⇒ x′=0, prove

if x>0 then x := x−1; (x≥0 ⇒ x′=0) else ok fi ⇒ (x≥0 ⇒ x′=0)

where ok = (x′=x) is the empty, or do-nothing, command. There is no need for a loop invariant or least fixed point. Loops with multiple intermediate shallow and deep exits work the same way. This simplified form of proof is possible because program commands and specifications can be mixed together meaningfully. Execution time (upper bounds, lower bounds, exact time) can be proven the same way, just by introducing a time variable. To prove termination, prove the execution time is finite. To prove nontermination, prove the execution time is infinite. For example, if the time variable is t, and time is measured by counting iterations, then to prove that execution of the previous while-loop takes time x when x is initially nonnegative, and takes forever when x is initially negative, prove

if x>0 then x := x−1; t := t+1; (x≥0 ⇒ t′=t+x) ∧ (x<0 ⇒ t′=∞) else ok fi ⇒ (x≥0 ⇒ t′=t+x) ∧ (x<0 ⇒ t′=∞)

where ok = (x′=x ∧ t′=t).
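The specification above can be checked empirically for small nonnegative starting values. This is only a sanity test of the claim, not part of the formalism; run is an invented helper that returns the final value of x together with the iteration count t:

```swift
// Execute `while x>0 do x := x-1 od`, counting iterations in t.
// Per the specification: for x ≥ 0 initially, the loop ends with
// x′ = 0 and takes exactly the initial value of x iterations.
func run(_ start: Int) -> (finalX: Int, time: Int) {
    var x = start
    var t = 0
    while x > 0 {
        x -= 1
        t += 1
    }
    return (x, t)
}

for start in 0...5 {
    let (finalX, time) = run(start)
    assert(finalX == 0 && time == start)
}
```

(The nontermination case for negative x, t′ = ∞, obviously cannot be tested this way; the loop simply never runs for negative inputs under this encoding, which is why the timed specification guards the two cases separately.)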
== References == E.C.R. Hehner, a Practical Theory of Programming, Springer-Verlag 1993. Most recent edition online at a Practical Theory of Programming. == External links == Publications by Eric Hehner.
https://en.wikipedia.org/wiki/Predicative_programming
Electronic programming guides (EPGs) and interactive programming guides (IPGs) are menu-based systems that provide users of television, radio, and other media applications with continuously updated menus that display scheduling information for current and upcoming broadcast programming (most commonly, TV listings). Some guides also feature backward scrolling to promote their catch-up content. They are commonly known as guides or TV guides. Non-interactive electronic programming guides (sometimes known as "navigation software") are typically available for television and radio, and consist of a digitally displayed, non-interactive menu of programming scheduling information shown by a cable or satellite television provider to its viewers on a dedicated channel. EPGs are transmitted by specialized video character generation (CG) equipment housed within each such provider's central headend facility. When the viewer tunes to an EPG channel, a menu is displayed that lists current and upcoming television shows on all available channels. A more modern form of the EPG, associated with both television and radio broadcasting, is the interactive [electronic] programming guide (IPG, though often referred to as EPG). An IPG allows television viewers and radio listeners to navigate scheduling information menus interactively, selecting and discovering programming by time, title, channel or genre using an input device such as a keypad, computer keyboard or television remote control. Its interactive menus are generated entirely within local receiving or display equipment using raw scheduling data sent by individual broadcast stations or centralized scheduling information providers. A typical IPG provides information covering a span of seven or 14 days. Data used to populate an interactive EPG may be distributed over the Internet, either for a charge or free of charge, and implemented on equipment connected directly or through a computer to the Internet.
Television-based IPGs in conjunction with Programme Delivery Control (PDC) technology can also facilitate the selection of TV shows for recording with digital video recorders (DVRs), also known as personal video recorders (PVRs). == History == === Key events === ==== North America ==== In 1981, United Video Satellite Group launched the first EPG service in North America, a cable channel known simply as The Electronic Program Guide. It allowed cable systems in the United States and Canada to provide on-screen listings to their subscribers 24 hours a day (displaying programming information up to 90 minutes in advance) on a dedicated cable channel. Raw listings data for the service was supplied via satellite to participating cable systems, each of which installed a computer within its headend facility to present that data to subscribers in a format customized to the system's unique channel lineup. The EPG Channel would later be renamed Prevue Guide and go on to serve as the de facto EPG service for North American cable systems throughout the remainder of the 1980s, the entirety of the 1990s, and – as TV Guide Network or TV Guide Channel – for the first decade of the 21st century. In 1986 at a trade show in Nashville, STV/Onsat, a print programming guide publisher, introduced SuperGuide, an interactive electronic programming guide for home satellite dish viewers. The system was the focus of a 1987 article in STV Magazine. The original system had a black-and-white display, and would locally store programming information for around one week in time. A remote control was used to interact with the unit. When the user found a show they wanted to watch, they would have to turn off the guide and then tune the satellite receiver to the correct service. The system was developed by Chris Schultheiss of STV/OnSat and engineer Peter Hallenbeck. The guide information was distributed by satellite using the home owner's dish as the receiver. 
The information was stored locally so that the user could use the guide without having to be on a particular satellite or service. In March 1990, a second-generation SuperGuide system was introduced that was integrated into the Uniden 4800 receiver. This version had a color display and the hardware was based on a custom chip; it was also able to store up to two weeks of programming information. When the user found the show of interest, they pressed a button on the remote and the receiver tuned to the show they wanted to watch. This unit also had a single button recording function, and controlled VCRs via an infrared output. Available in North America, it was the first commercially available unit for home use that had a locally stored guide integrated with the receiver for single button viewing and taping. A presentation on the system was given at the 1990 IEEE consumer electronics symposium in Chicago. In June 1988 a patent was awarded that concerned the implementation of a searchable electronic program guide – an interactive program guide (IPG). TV Guide Magazine and Liberty Media established a joint venture in 1992 known as TV Guide On Screen to develop an EPG. The joint venture was led by video game veteran Bruce Davis, and introduced an interactive program guide to the market in late 1995 in the General Instrument CFT2200 set-top cable box. Leading competitors to TV Guide On Screen included Prevue Guide and StarSight Telecast. Telecommunications Inc, owner of Liberty Media, acquired United Video Satellite Group, owner of Prevue Guide, in 1995. TV Guide On Screen and Prevue Guide were later merged. TV Guide On Screen for digital cable set top boxes premiered in the DigiCable series of set top boxes from General Instrument shortly thereafter. See TV Guide for subsequent developments.
Scientific Atlanta introduced the 8600X advanced analog set-top box in 1993, which included an interactive electronic program guide, downloadable software, 2-way communications, and pause/FF/REW for VCR-like viewing. Millions were deployed by Time Warner and other customers. ==== Western Europe ==== In Western Europe, 59 million television households were equipped with EPGs at the end of 2008, a penetration of 36% of all television households. The situation varies from country to country, depending on the status of digitization and the role of pay television and IPTV in each market. With Sky as an early mover and the BBC iPlayer and Virgin Media as ambitious followers, the United Kingdom is the most developed and innovative EPG market to date, with 96% of viewers having frequently used an EPG in 2010. Inview Technology is one of the UK's largest and oldest EPG producers, dating back to 1996 and currently in partnership with Humax and Skyworth. Scandinavia is also a highly innovative EPG market. Even in Italy, EPG penetration is relatively high, at 38%. In France, IPTV is the main driver of EPG developments. In contrast to many other European countries, Germany lags behind, due to a relatively slow digitization process and the minor role of pay television in that country.
Higher-end receivers for digital broadcast radio and digital satellite radio commonly feature built-in IPGs as well. Demand for non-interactive electronic television program guides – television channels displaying listings for currently airing and upcoming programming – has been nearly eliminated by the widespread availability of interactive program guides for television; TV Guide Network, the largest of these services, eventually abandoned its original purpose as a non-interactive EPG service and became a traditional general entertainment cable channel, eventually rebranding as Pop in January 2015. Television-based IPGs provide the same information as EPGs, but faster and often in much more detail. When television IPGs are supported by PVRs, they enable viewers to plan viewing and recording by selecting broadcasts directly from the EPG, rather than programming timers. The aspect of an IPG most noticed by users is its graphical user interface (GUI), typically a grid or table listing channel names and program titles and times: web and television-based IPG interfaces allow the user to highlight any given listing and call up additional information about it supplied by the EPG provider. Programs on offer from subchannels may also be listed. Typical IPGs also allow users the option of searching by genre, as well as immediate one-touch access to, or recording of, a selected program. Reminders and parental control functions are also often included. The IPGs within some DirecTV IRDs can control a VCR using an attached infrared emitter that emulates its remote control. The latest development in IPGs is personalization through a recommendation engine or semantics. Semantics are used to permit interest-based suggestions to one or several viewers on what to watch or record based on past patterns. One such IPG, iFanzy, allows users to customize its appearance. 
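The guide functions described above (grid-style lookup by channel and time, and searching by genre) can be sketched with a toy data model. Listing, its fields, and the sample records are invented for illustration and do not reflect any real guide format:

```swift
// A minimal sketch of the data an IPG works with.
struct Listing {
    var channel: String
    var title: String
    var genre: String
    var start: Int     // minutes since midnight, for simplicity
    var duration: Int  // minutes
}

let listings = [
    Listing(channel: "4", title: "Evening News", genre: "News",  start: 18 * 60, duration: 60),
    Listing(channel: "7", title: "Film Night",   genre: "Movie", start: 20 * 60, duration: 120),
    Listing(channel: "4", title: "Quiz Hour",    genre: "Game",  start: 19 * 60, duration: 60),
]

// Searching by genre, as an IPG search screen would.
let movies = listings.filter { $0.genre == "Movie" }.map(\.title)

// Finding what is on a given channel at a given time, as a grid view would.
func onAir(channel: String, at minute: Int) -> String? {
    listings.first {
        $0.channel == channel && ($0.start..<($0.start + $0.duration)).contains(minute)
    }?.title
}
```

A real IPG would populate such records from broadcast metadata streams rather than literals, but the genre filtering and time-window lookups are essentially the same.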
Standards for delivery of scheduling information to television-based IPGs vary from application to application, and by country. Older television IPGs like Guide Plus+ relied on analog technology (such as the vertical blanking interval of analog television video signals) to distribute listings data to IPG-enabled consumer receiving equipment. In Europe, the European Telecommunications Standards Institute (ETSI) published standard ETS 300 707 to standardize the delivery of IPG data over digital television broadcast signals. Listings data for IPGs integrated into digital terrestrial television and radio receivers of the present day is typically sent within each station's MPEG transport stream, or alongside it in a special data stream. The ATSC standard for digital terrestrial television, for instance, uses tables sent in each station's PSIP. These tables are meant to contain program start times and titles along with additional program descriptive metadata. Current time signals are also included for on-screen display purposes, and they are also used to set timers on recording devices. Devices embedded within modern digital cable and satellite television receivers, on the other hand, customarily rely upon third-party listings metadata aggregators to provide them with their on-screen listings data. Such companies include Tribune TV Data (now Gracenote, part of Nielsen Holdings), Gemstar-TV Guide (now TiVo Corporation), FYI Television, Inc. in the United States and Europe; TV Media in the United States and Canada; Broadcasting Dataservices in Europe and Dayscript in Latin America; and What's On India Media Pvt. Ltd in India, Sri Lanka, Indonesia, the Middle East and Asia. 
Some IPG systems built into older set-top boxes designed to receive terrestrial digital signals, and into television sets with built-in digital tuners, may offer fewer interactive features than those included in cable, satellite and IPTV converters; technical limitations in these models may prevent users from accessing program listings more than 16 hours in advance or from viewing complete program synopses, and may prevent the IPG from parsing synopses for certain programs from the MPEG stream or from displaying next-day listings until at or after 12:00 a.m. local time. IPGs built into newer television (including Smart TV), digital terrestrial set-top box and antenna-ready DVR models feature on-screen displays and interactive guide features more comparable to their pay television set-top counterparts, including the ability to display grids and, in the case of DVRs intended for terrestrial use, the ability – with an Internet connection – to access listings and content from over-the-top services. A growing trend is for manufacturers such as Elgato and Topfield and software developers such as Microsoft in their Windows Media Center to use an Internet connection to acquire data for their built-in IPGs. This enables greater interactivity with the IPG such as media downloads, series recording and programming of the recordings for the IPG remotely; for example, IceTV in Australia enables TiVo-like services to competing DVR/PVR manufacturers and software companies. In developing IPG software, manufacturers must include functions to address the growing volumes of increasingly complex data associated with programming. This data includes program descriptions, schedules and parental television ratings, along with flags for technical and access features such as display formats, closed captioning and Descriptive Video Service. They must also include user configuration information such as favorite channel lists, and multimedia content.
To meet this need, some set-top box software designs incorporate a "database layer" that utilizes either proprietary functions or a commercial off-the-shelf embedded database system for sorting, storing and retrieving programming data. == See also == Digital video recorders NexTView Teletext TV Genius Video on demand MythTV Schedules Direct == References == == External links == "Electronic Programme Guide; Protocol for a TV Guide using electronic data transmission" (PDF). ETSI. April 2003. 300 707 V1.2.1 "Television systems; Code of practice for an Electronic Programme Guide" (PDF). ETSI. December 2002. TR 101 288 V1.3.1
https://en.wikipedia.org/wiki/Electronic_program_guide
In computer programming, redundant code is source code or compiled code in a computer program that is unnecessary, such as: recomputing a value that has previously been calculated and is still available, code that is never executed (known as unreachable code), code which is executed but has no external effect (e.g., does not change the output produced by a program; known as dead code). A NOP instruction might be considered to be redundant code that has been explicitly inserted to pad out the instruction stream or introduce a time delay, for example to create a timing loop by "wasting time". Identifiers that are declared, but never referenced, are termed redundant declarations. == Examples == The following examples are in C. The second iX*2 expression is redundant code and can be replaced by a reference to the variable iY. Alternatively, the definition int iY = iX*2 can instead be removed. Consider: As a consequence of using the C preprocessor, the compiler will only see the expanded form: Because the use of min/max macros is very common, modern compilers are programmed to recognize and eliminate redundancy caused by their use. There is no redundancy, however, in the following code: If the initial call to rand(), modulo range, is greater than or equal to cutoff, rand() will be called a second time for a second computation of rand()%range, which may result in a value that is actually lower than the cutoff. The max macro thus may not produce the intended behavior for this function. == See also == Code bloat Code reuse Common subexpression elimination Don't repeat yourself Duplicate code Redundancy == References ==
https://en.wikipedia.org/wiki/Redundant_code
Pay television, also known as subscription television, premium television or, when referring to an individual service, a premium channel, refers to subscription-based television services, usually provided by multichannel television providers, but also increasingly via digital terrestrial and streaming television. In the United States, subscription television began in the late 1970s and early 1980s in the form of encrypted analog over-the-air broadcast television which could be decrypted with special equipment. The concept rapidly expanded through the multi-channel transition and into the post-network era. Other parts of the world beyond the United States, such as France and Latin America have also offered encrypted analog terrestrial signals available for subscription. The term is most synonymous with premium entertainment services focused on films or general entertainment programming such as, in the United States, Cinemax, HBO, MGM+, Showtime, and Starz, but such services can also include those devoted to sports, as well as adult entertainment. == Business model == In contrast to most other multichannel television broadcasters, which depend on advertising and carriage fees as their sources of revenue, the majority of pay television services rely almost solely on monthly subscription fees paid by individual customers. As a result, pay television outlets are most concerned with offering content that can justify the cost of the service, which helps to attract new subscribers, and retain existing subscribers. 
Many pay television services consist of multiple individual channels, referred to as "multiplex" services (in reference to multiplex cinemas), where a main flagship channel is accompanied by secondary services with distinct schedules focusing on specific genres and audiences (such as multiplexes focusing more on "classic" films, or family-oriented programming), time shifting, or brand licensing deals (such as channels focusing specifically on Disney films, or content from American pay television brands if they do not specifically run their own network in a specific market). Typically, these services are bundled together with the main channel at no additional charge and cannot be purchased separately. Depending on local regulations, pay television services generally have more lenient content standards because of their relatively narrower distribution, and not being subject to pressure from sponsors to tone down content. As a result, programming is typically aired with limited to no edits for time or, where applicable, mature content such as graphic violence, profanity, nudity, and sexual activity. As premium television services are commonly devoid of traditional commercial advertising, breaks between programming typically include promotions for upcoming programs, and interstitial segments (such as behind-the-scenes content, interviews, and other feature segments). Some sports-based pay services, however, may feature some commercial advertising, particularly if they simulcast sporting events that are broadcast by advertiser-supported television networks. In addition, most general interest or movie-based pay services do not adhere to the common top and bottom of the hour scheduling of other cable channels and terrestrial broadcasters. As such, programs often either follow conventional scheduling or have airtimes in five-minute increments (for example, 7:05 a.m.
or 4:40 p.m.); since such channels broadcast content without in-program break interruptions, this sometimes leads to extended or abbreviated breaks between programs, depending on when the previous program concludes and when the start time of the next program is. The only universal variation to this is prime time, where the main channel in each pay service's suite usually schedules films to start on the hour. === Programming === Films comprise much of the content seen on most pay television services, particularly those with a general entertainment format and those that focus exclusively on films. Services often obtain rights to films through exclusive agreements with film distributors. Films acquired during the original term of license agreements with a distributor may also be broadcast as "sub-runs", in which a service holds rights to a film long after the conclusion of a distribution agreement (under this arrangement, the pay service that originally licensed the rights to a particular film title, or one other than that which had held rights, may hold the broadcast rights through a library content deal). Many general interest premium channels also produce original television series. Due to the aforementioned leniency in content standards, these too can contain content that is more mature than that of other cable channels or television networks. These series also tend to be high-budget and aim for critical success in order to attract subscribers: notable premium series, such as Cinemax's Banshee, The Knick, Strike Back, Jett, HBO's Curb Your Enthusiasm, Game of Thrones, Sex and the City, and The Sopranos, and Showtime's Dexter, Homeland, and Weeds, have achieved critical acclaim and have won various television awards. Some premium channels also broadcast television specials, which most commonly consist of concerts and concert films, documentaries, stand-up comedy, and in the past, theatrical plays.
Sports programming is also featured on some premium services; HBO was historically known for its broadcasts of boxing, while Showtime and Epix also carry mixed martial arts events. Some general interest premium channels have aired other professional sporting events in the past: HBO, for example, carried games from the National Hockey League (NHL), National Basketball Association (NBA) and American Basketball Association (ABA) in its early years, and from 1975 to 1999 aired the Wimbledon tennis tournament. Specialty pay sports channels also exist, often focusing on international sports considered niche to domestic audiences (such as, in the United States, cricket), and are typically sold at a higher expense than traditional premium services. Out-of-market sports packages in North America are multi-channel pay services carrying professional or collegiate sporting events which are sold in a seasonal package. They are typically the most expensive type of pay services, generally running in the range of $35 to $50 per month. Some pay services also offer pornographic films; Cinemax was, initially, well known for carrying a late-night block of softcore films and series known as "Max After Dark"—a reputation that led to the network often being nicknamed "Skinemax" by viewers. This reputation, however, had largely faded by the mid-1990s, as Cinemax had already established a reputation for popular movies and shows, such as Goodfellas being an exclusive premiere on the network in the early 1990s. Cinemax eventually phased out the programming completely in the 2010s, citing that it did not align with its current focus on action programming, and that internet pornography and the amount of sexual content in other mainstream premium series (such as Game of Thrones) made a specific block for such content redundant.
Specialized channels dedicated to pornographic films also exist, carrying either softcore adult programs (such as Playboy TV) or more hardcore content (such as The Erotic Network and Hustler TV). == Pricing and packaging == Pay television channels come in different price ranges. Many channels that carry advertising combine this income with a lower subscription fee. These are called "mini-pay" channels (a term also used for smaller scale commercial-free pay television services) and are often sold as a part of a package with numerous similarly priced channels. Usually, however, the regular pricing for premium channels ranges from just under $10 to near $25 per month per suite, with lower prices available via bundling options with cable or satellite providers, or special limited offers which are available during free preview periods or before the launch of a network's prestige series. However, some other channels, such as sports and adult networks, may charge prices as high as nearly $50 per month. There are also premium television services which are priced significantly higher than the mini-pay channels, but they compensate for their higher price by carrying little or no advertising and also providing a higher quality program output. As advertising sales are sensitive to the business cycle, some broadcasters try to balance them with more stable income from subscriptions. Some providers offer services owned by the same company in a single package. For example, American satellite provider DirecTV offers the Encore channels along with the Starz multiplex (both owned by Starz Inc.)
in its "Starz Super Pack"; and The Movie Channel, Flix (owned by Paramount, and the latter of which continues to be sold in the DirecTV package despite Showtime Networks no longer owning Sundance TV; that channel is now owned by AMC Networks) along with Showtime in its "Showtime Unlimited" package; Cinemax and its multiplex networks, in turn, are almost always packaged with HBO (both owned by Warner Bros. Discovery). Though selling premium services that are related by ownership as a package is common, that may not always be the situation: for example, in the United States, Cinemax and Encore are optionally sold separately from or in a single package with their respective parent networks HBO and Starz, depending on the service provider. The Movie Channel and Flix, meanwhile, are usually sold together with Showtime (all three channels are owned by Paramount Global); though subscribers are required to purchase Showtime in order to receive Flix, The Movie Channel does not have such a restriction, as a few providers optionally sell that service without requiring a Showtime subscription. Unlike other cable networks, premium services are almost always subscribed to a la carte, meaning that one can, for example, subscribe to HBO without subscribing to Showtime (in Canada, there are slight modifications, as most providers include American superstations – such as WAPA-TV – with their main premium package by default). However, subscribing to an "individual" service automatically includes access to all of that service's available multiplex channels and, in some cases, access to content via video-on-demand (in the form of a conventional VOD television service, and in some cases, a companion on-demand streaming service as well).
Most pay television providers also offer a selection of premium services (for example, the HBO, Showtime and Starz packages) in one bundle at a greatly reduced price compared to purchasing each service separately, as an inducement for subscribers to remain with their service provider or for prospective customers to sign up. Similarly, many television providers offer general interest or movie-based premium channels at no additional charge for a trial period, often one to three months, though there have been rare instances of free trials for pay services that last up to one year for newer subscribers to that provider's television service. == Distribution == Pay television has become popular with cable and satellite television. Pay television services often, at least two to three times per year, provide free previews of their services, in order to court potential subscribers by allowing this wider audience to sample the service for a period of days or weeks; these are typically scheduled to showcase major special event programming, such as the pay cable premiere of a blockbuster feature film, the premiere (either a series or season premiere) of a widely anticipated or critically acclaimed original series or occasionally, a high-profile special (such as a concert). Subscription services transmitted via analogue terrestrial television have also existed, to varying degrees of success. The best-known example of such a service in Europe is Canal+ and its scrambled services, which operated in France from 1984 to the 2011 closedown of analogue television, in Spain from 1990 to 2005 and in Poland from 1995 to 2001.
Some American television stations launched pay services (known simply as "subscription television" services) such as SuperTV, Wometco Home Theater, PRISM (which principally operated as a cable service, only being simultaneously carried over-the-air for a short time during the 1980s, and unlike other general-interest pay services accepted outside advertising for broadcast during its sports telecasts), Preview, SelecTV and ON TV in the late 1970s, but those services disappeared as competition from cable television expanded during the 1980s. In Australia, Foxtel, Optus Television and TransACT are the major pay television distributors, all of which provide cable services in some metropolitan areas, with Foxtel providing satellite service for all other areas where cable is not available. Austar formerly operated as a satellite pay service, until it merged with Foxtel and SelecTV. The major distributors of pay television in New Zealand are Sky Network Television on satellite and Vodafone on cable. In the 2010s, over-the-top subscription video on demand (SVOD) services distributed via internet video emerged as a major competitor to traditional pay television, with services such as Amazon Video, Hulu, and Netflix gaining prominence. Similarly to pay television services, their libraries include acquired content (which can include not only films but also acquired television series), and a mix of original series, films, and specials. The shift towards SVOD has resulted in increasing competition within the sector, with media conglomerates having launched their own services (such as Disney+, Paramount+, and Peacock) and made acquisitions (such as Disney's purchase of the majority of Hulu) to compete, and existing premium networks such as HBO (HBO Now) and Showtime launching direct-to-consumer versions of their existing services to appeal to cord cutters.
HBO and Showtime later absorbed their DTC offerings into wider services with a focus on their parent companies' libraries, with HBO Now replaced by HBO Max (now Max) in 2020 (which adds content from other Warner Bros. properties and third parties, and would also be included with existing HBO subscriptions via television providers), and Showtime formally merging with Paramount+ in 2023. Canadian premium service The Movie Network similarly merged with the CraveTV service owned by parent company Bell Media in 2018. == Ambiguities == === Pay-per-view === Pay-per-view (PPV) services are similar to subscription-based pay television services in that customers must pay to have the broadcast decrypted for viewing, but usually only entail a one-time payment for a single or time-limited viewing. Programs offered via pay-per-view are most often movies or sporting events, but may also include other events, such as concerts and even softcore adult programs. In the U.S., the initial concept and technology for pay-per-view for broadcast television was first developed in the early 1950s, including a crude decrypting of the over-the-air television signal and a decoding box, but it never caught on at the time. It took another four decades before cable broadcasters started using pay-per-view on a widespread basis. === Free-to-view === "Free" variants are free-to-air (FTA) and free-to-view (FTV); however, FTV services are normally encrypted and decryption cards either come as part of an initial subscription to a pay television bouquet – in other words, an offer of pay-TV channels – or can be purchased for a one-time cost. FTA and FTV systems may still have selective access. ABC Australia is one example, as much of its programming content is free-to-air except for National Rugby League (NRL) games, which are encrypted.
== Over-the-air subscription television == ON TV, an over-the-air subscription service that served Chicago, Cincinnati, Dallas/Fort Worth, Detroit, Fort Lauderdale, Phoenix, Salem/Portland and San Francisco. PRISM, an over-the-air and cable television subscription service that served Southeastern Pennsylvania, Southern New Jersey, Delaware and the Delmarva Peninsula. SelecTV, an over-the-air subscription service that served Los Angeles, Milwaukee and Philadelphia and later the Wometco Home Theater territories after WHT ceased its own programming. Spectrum (TV channel), an over-the-air subscription service that served Chicago, Fairbanks, and Minneapolis–St. Paul in the early 1980s. SuperTV, an over-the-air subscription service that served Washington, D.C., the Capital and Central regions of Maryland and Northern Virginia. Tele1st, an over-the-air subscription service that served Chicago, Illinois. Wometco Home Theater, an over-the-air subscription service that served New York City, Northern and Central New Jersey, Long Island and Fairfield County, Connecticut. == See also == List of cable television companies List of satellite television companies Satellite television by region Terrestrial television Out-of-market sports package Pay-per-view Cable television piracy Pirate decryption Premium segment Video on demand Pay television in the United States United States pay television content advisory system Subscription television in the Philippines == References ==
https://en.wikipedia.org/wiki/Pay_television
A software bug is a defect in computer software. A computer program with many or serious bugs may be described as buggy. The effects of a software bug range from minor (such as a misspelled word in the user interface) to severe (such as frequent crashing). In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product". Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations. == History == == Terminology == Mistake metamorphism (from Greek meta = "change", morph = "form") refers to the transformation of a mistake committed by an analyst in the early stages of the software development lifecycle into a defect in the final stage of the cycle. Different stages of a mistake in the development cycle may be described as mistake, anomaly, fault, failure, error, exception, crash, glitch, bug, defect, incident, or side effect. == Examples == Software bugs have been linked to disasters. Software bugs in the Therac-25 radiation therapy machine were directly responsible for patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch due to a bug in the on-board guidance computer program. In 1994, an RAF Chinook helicopter crashed, killing 29; the crash was initially blamed on pilot error, but was later thought to have been caused by a software bug in the engine-control computer. Buggy software caused the early 21st century British Post Office scandal.
== Controversy == Sometimes the use of bug to describe the behavior of software is contentious due to the term's connotations. Some suggest that the term should be abandoned, contending that bug implies that the defect arose on its own, and push to use defect instead since it more clearly indicates that defects are caused by humans. Some contend that bug may be used to cover up an intentional design decision. In 2011, after receiving scrutiny from US Senator Al Franken for recording and storing users' locations in unencrypted files, Apple called the behavior a bug. However, Justin Brookman of the Center for Democracy and Technology directly challenged that portrayal, stating "I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users." == Prevention == Preventing bugs as early as possible in the software development process is a target of investment and innovation. === Language support === Newer programming languages tend to be designed to prevent common bugs based on vulnerabilities of existing languages. Lessons learned from older languages such as BASIC and C are used to inform the design of later languages such as C# and Rust. A compiled language allows for detecting some typos (such as a misspelled identifier) before runtime, which is earlier in the software development process than for an interpreted language. Languages may include features such as a static type system, restricted namespaces and modular programming. For example, for a typed, compiled language (like C): float num = "3"; is syntactically correct, but fails type checking since the right side, a string, cannot be assigned to a float variable. Compilation fails – forcing this defect to be fixed before development progress can resume. With an interpreted language, a failure would not occur until later at runtime.
Some languages exclude features that easily lead to bugs, at the expense of slower performance – the principle being that it is usually better to write simpler, slower correct code than complicated, buggy code. For example, Java does not support pointer arithmetic, which is generally fast but is considered dangerous: it is relatively likely to cause a major bug. Some languages include features that add runtime overhead in order to prevent some bugs. For example, many languages include runtime bounds checking and a way to handle out-of-bounds conditions instead of crashing. === Techniques === Programming techniques such as programming style and defensive programming are intended to prevent typos, since a bug may be caused by a relatively minor typographical error (typo) in the code. For example, this code executes function foo only if condition is true. if (condition) foo(); But this code always executes foo: if (condition); foo(); A convention that tends to prevent this particular issue is to require braces for a block even if it has just one line. if (condition) { foo(); } Enforcement of conventions may be manual (i.e. via code review) or via automated tools. === Specification === Some contend that writing a program specification, which states the intended behavior of a program, can prevent bugs. Others, however, contend that formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy. === Software testing === One goal of software testing is to find bugs. Measurements during testing can provide an estimate of the number of likely bugs remaining. This becomes more reliable the longer a product is tested and developed. === Agile practices === Agile software development may involve frequent software releases with relatively small changes. Defects are revealed by user feedback.
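The stray-semicolon typo and the brace convention described above can be demonstrated with a small sketch (the counter and function names here are illustrative, not from the source):

```c
#include <stdbool.h>

static int calls = 0;
static void foo(void) { calls++; }

/* Intended: call foo only when the condition holds; braces make
 * the guarded block explicit. */
static void guarded(bool condition) {
    if (condition) {
        foo();
    }
}

/* Buggy: the stray semicolon ends the if statement with an empty
 * body, so the following foo() runs unconditionally. */
static void buggy(bool condition) {
    if (condition);   /* <- the typo: an empty statement */
        foo();        /* indentation suggests guarding, but isn't */
}
```

Running guarded(false) leaves the counter untouched, while buggy(false) still calls foo, despite the two functions looking nearly identical.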
With test-driven development (TDD), unit tests are written while writing the production code, and the production code is not considered complete until all tests complete successfully. === Static analysis === Tools for static code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software. === Instrumentation === Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten. === Open source === Open source development allows anyone to examine source code. A school of thought popularized by Eric S. Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow". This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because, "even if people are reviewing the code, that doesn't mean they're qualified to do so." An example of an open-source software bug was the 2008 OpenSSL vulnerability in Debian. == Debugging == Debugging can be a significant part of the software development lifecycle. 
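The kind of explicit instrumentation mentioned above (a statement saying PRINT "I AM HERE") might look like this in C; the TRACE macro and process function are illustrative assumptions:

```c
#include <stdio.h>

/* A trace macro in the spirit of PRINT "I AM HERE": logs the source
 * location so execution can be followed without a debugger. */
#define TRACE() fprintf(stderr, "reached %s:%d (%s)\n", \
                        __FILE__, __LINE__, __func__)

int process(int value) {
    TRACE();              /* marks entry into the function */
    if (value < 0) {
        TRACE();          /* marks which branch was actually taken */
        return -value;
    }
    return value * 2;
}
```

The trace output goes to stderr and does not affect the function's result, so the instrumentation can be left in place or compiled out once the bug is found.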
Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that "a good part of the remainder of my life was going to be spent in finding errors in my own programs". A program known as a debugger can help a programmer find faulty code by examining the inner workings of a program, such as executing code line-by-line and viewing variable values. As an alternative to using a debugger, code may be instrumented with logic to output debug information to trace program execution and view values. Output is typically to console, window, log file or a hardware output (i.e. LED). Some contend that locating a bug is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a different, apparently unrelated part of the system, making it difficult to track down. For example, an error in a graphics rendering routine may cause a file I/O routine to fail. Sometimes, the most difficult part of debugging is finding the cause of the bug. Once the cause is found, correcting the problem is sometimes easy, if not trivial. Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmers. Often, such a logic error requires a section of the program to be overhauled or rewritten. Some contend that as a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such. Typically, the first step in locating a bug is to reproduce it reliably. If unable to reproduce the issue, a programmer cannot find the cause of the bug and therefore cannot fix it. Some bugs are revealed by inputs that may be difficult for the programmer to re-create.
One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are called heisenbugs (humorously named after the Heisenberg uncertainty principle). Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging rose, such as static code analysis by abstract interpretation. Often, bugs come about during coding, but faulty design documentation may cause a bug. In some cases, changes to the code may eliminate the problem even though the code then no longer matches the documentation. In an embedded system, the software is often modified to work around a hardware bug since it's cheaper than modifying the hardware. == Management == Bugs are managed via activities like documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code. Tools are often used to track bugs and other issues with software. Typically, different tools are used by the software development team to track their workload than by customer service to track user feedback. A tracked item is often called bug, defect, ticket, issue, feature, or for agile software development, story or epic. Items are often categorized by aspects such as severity, priority and version number. In a process sometimes called triage, choices are made for each bug about whether and when to fix it based on information such as the bug's severity and priority and external factors such as development schedules. Triage generally does not include investigation into cause. Triage may occur regularly. 
Triage generally consists of reviewing new bugs since the previous triage and maybe all open bugs. Attendees may include project manager, development manager, test manager, build manager, and technical experts. === Severity === Severity is a measure of the impact the bug has. This impact may be data loss, financial, loss of goodwill and wasted effort. Severity levels are not standardized, but differ by context such as industry and tracking tool. For example, a crash in a video game has a different impact than a crash in a bank server. Severity levels might be crash or hang, no workaround (user cannot accomplish a task), has workaround (user can still accomplish the task), visual defect (a misspelling for example), or documentation error. Another example set of severities: critical, high, low, blocker, trivial. The severity of a bug may be a separate category from its priority for fixing, or the two may be quantified and managed separately. A bug severe enough to delay the release of the product is called a show stopper. === Priority === Priority describes the importance of resolving the bug in relation to other bugs. Priorities might be numerical, such as 1 through 5, or named, such as critical, high, low, and deferred. The values might be similar or identical to severity ratings, even though priority is a different aspect. Priority may be a combination of the bug's severity with the level of effort to fix. A bug with low severity but easy to fix may get a higher priority than a bug with moderate severity that requires significantly more effort to fix. === Patch === Bugs of sufficiently high priority may warrant a special release, which is sometimes called a patch. === Maintenance release === A software release that emphasizes bug fixes may be called a maintenance release – to differentiate it from a release that emphasizes new features or other changes. === Known issue === It is common practice to release software with known, low-priority bugs or other issues.
Possible reasons include but are not limited to: A deadline must be met and resources are insufficient to fix all bugs by the deadline The bug is already fixed in an upcoming release, and it is not of high priority The changes required to fix the bug are too costly or affect too many other components, requiring a major testing activity It may be suspected, or known, that some users are relying on the existing buggy behavior; a proposed fix may introduce a breaking change The problem is in an area that will be obsolete with an upcoming release; fixing it is unnecessary "It's not a bug, it's a feature": a misunderstanding exists between expected and actual behavior, or the behavior is an undocumented feature === Implications === The amount and type of damage a software bug may cause affects decision-making, processes and policy regarding software quality. In applications such as human spaceflight, aviation, nuclear power, health care, public transport or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than, say, a photo editing application. Other than the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. showed that projects invest a median of 17 percent of the development effort in bug fixing. In 2020, research on GitHub repositories showed a median of 20 percent. === Cost === In 1994, NASA's Goddard Space Flight Center managed to reduce their average number of errors from 4.5 per 1,000 lines of code (SLOC) down to 1 per 1,000 SLOC. Another study in 1990 reported that exceptionally good software development processes can achieve deployment failure rates as low as 0.1 per 1,000 SLOC.
This figure is iterated in literature such as Code Complete by Steve McConnell, and the NASA study on Flight Software Complexity. Some projects even attained zero defects: the firmware in the IBM Wheelwriter typewriter, which consists of 63,000 SLOC, and the Space Shuttle software, with 500,000 SLOC. === Benchmark === To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs, such as the Siemens benchmark, ManyBugs (a benchmark of 185 C bugs in nine open-source programs), and Defects4J (a benchmark of 341 Java bugs from 5 open-source projects). Defects4J contains the corresponding patches, which cover a variety of patch types. == Types == Some notable types of bugs: === Design error === A bug can be caused by insufficient or incorrect design based on the specification. For example, given that the specification is to alphabetize a list of words, a design bug might occur if the design does not account for symbols, resulting in incorrect alphabetization of words with symbols. === Arithmetic === Numerical operations can result in unexpected output, slow processing, or crashing. Such a bug can come from a lack of awareness of the qualities of the data storage, such as a loss of precision due to rounding, numerically unstable algorithms, arithmetic overflow and underflow, or from lack of awareness of how calculations are handled by different software coding languages, such as division by zero, which in some languages may throw an exception, and in others may return a special value such as NaN or infinity. === Control flow === A control flow bug, a.k.a. logic error, is characterized by code that does not fail with an error, but does not have the expected behavior, such as infinite looping, infinite recursion, incorrect comparison in a conditional such as using the wrong comparison operator, and the off-by-one error. === Interfacing === Incorrect API usage. Incorrect protocol implementation. Incorrect hardware handling.
Incorrect assumptions of a particular platform.
Incompatible systems. A new API or communications protocol may seem to work when two systems use different versions, but errors may occur when a function or feature implemented in one version is changed or missing in another. In production systems which must run continually, shutting down the entire system for a major update may not be possible, such as in the telecommunication industry or the internet. In this case, smaller segments of a large system are upgraded individually, to minimize disruption to a large network. However, some sections could be overlooked and not upgraded, causing compatibility errors which may be difficult to find and repair.
Incorrect code annotations.
=== Concurrency ===
Deadlock – a task cannot continue until a second finishes, but at the same time, the second cannot continue until the first finishes.
Race condition – multiple simultaneous tasks compete for resources.
Errors in critical sections, mutual exclusions and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
=== Resourcing ===
Null pointer dereference.
Using an uninitialized variable.
Using an otherwise valid instruction on the wrong data type (see packed decimal/binary-coded decimal).
Access violations.
Resource leaks, where a finite system resource (such as memory or file handles) becomes exhausted by repeated allocation without release.
Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation. These are frequently security bugs.
Excessive recursion which, though logically valid, causes stack overflow.
Use-after-free error, where a pointer is used after the system has freed the memory it references.
Double free error.
=== Syntax ===
Use of the wrong token, such as performing assignment instead of an equality test.
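The assignment-versus-equality confusion can be sketched in Python, where comparison is an expression but a bare assignment in a condition is rejected before the code ever runs (the snippet below is illustrative only):

```python
x = 5              # '=' binds the name x to the value 5 (assignment)
check = (x == 5)   # '==' tests equality and yields a boolean
print(check)       # True

# In C-family languages, `if (x = 5)` compiles and silently always takes
# the branch; Python instead reports a SyntaxError for `if x = 5:`,
# catching this class of bug at parse time.
import ast
try:
    ast.parse("if x = 5:\n    pass")
except SyntaxError:
    print("rejected")   # prints "rejected"
```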
For example, in some languages x=5 will set the value of x to 5 while x==5 will check whether x is currently 5 or some other number. Interpreted languages allow such code to fail. Compiled languages can catch such errors before testing begins.
=== Teamwork ===
Unpropagated updates; e.g. a programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
Comments out of date or incorrect: many programmers assume the comments accurately describe the code.
Differences between documentation and product.
== In politics ==
=== "Bugs in the System" report ===
The Open Technology Institute, run by the group New America, released a report "Bugs in the System" in August 2016 stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure." One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security. Government researchers, companies, and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws. The Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and the Electronic Communications Privacy Act criminalize and create civil penalties for actions that security researchers routinely engage in while conducting legitimate security research, the report said.
== In popular culture ==
In video gaming, the term "glitch" is sometimes used to refer to a software bug. An example is the glitch and unofficial Pokémon species MissingNo. In both the 1968 novel 2001: A Space Odyssey and the corresponding film of the same name, the spaceship's onboard computer, HAL 9000, attempts to kill all its crew members.
In the follow-up 1982 novel, 2010: Odyssey Two, and the accompanying 1984 film, 2010: The Year We Make Contact, it is revealed that this action was caused by the computer having been programmed with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret from the crew; this conflict caused HAL to become paranoid and eventually homicidal. In the English version of the Nena 1983 song 99 Luftballons (99 Red Balloons), as a result of "bugs in the software", the release of a group of 99 red balloons is mistaken for an enemy nuclear missile launch, requiring an equivalent launch response and resulting in catastrophe. In the 1999 American comedy Office Space, three employees attempt (unsuccessfully) to exploit their company's preoccupation with the Y2K computer bug using a computer virus that sends rounded-off fractions of a penny to their bank account, a long-known technique described as salami slicing. The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database application. The 2008 Canadian film Control Alt Delete is about a computer programmer at the end of 1999 struggling to fix bugs at his company related to the year 2000 problem.
== See also ==
Anti-pattern
Automatic bug fixing
Bug bounty program
Glitch removal
Hardware bug
ISO/IEC 9126, which classifies a bug as either a defect or a nonconformity
List of software bugs
Orthogonal Defect Classification
Racetrack problem
RISKS Digest
Single-event upset
Software defect indicator
Software regression
Software rot
VUCA
== References ==
== External links ==
"Common Weakness Enumeration" – an expert webpage focusing on bugs, at NIST.gov
BUG type of Jim Gray – another Bug type
Picture of the "first computer bug" at the Wayback Machine (archived January 12, 2015)
"The First Computer Bug!" – an email from 1981 about Adm. Hopper's bug
"Toward Understanding Compiler Bugs in GCC and LLVM". A 2016 study of bugs in compilers
https://en.wikipedia.org/wiki/Software_bug
Attribute-oriented programming (@OP) is a technique for embedding metadata, namely attributes, within program code.
== Attribute-oriented programming in various languages ==
=== Java ===
With the inclusion of the Metadata Facility for Java (JSR-175) into the J2SE 5.0 release, it is possible to utilize attribute-oriented programming right out of the box. The XDoclet library makes it possible to use the attribute-oriented programming approach in earlier versions of Java.
=== C# ===
The C# language has supported attributes from its very first release. These attributes were used to give run-time information and are not used by a preprocessor. Currently, with source generators, attributes can be used to drive generation of additional code at compile time.
=== UML ===
The Unified Modeling Language (UML) supports a kind of attribute called stereotypes.
=== Hack ===
The Hack programming language supports attributes. Attributes can be attached to various program entities, and information about those attributes can be retrieved at run-time via reflection.
== Tools ==
Annotation Processing Tool (apt)
Spoon, an Annotation-Driven Java Program Transformer
XDoclet, a Javadoc-Driven Program Generator
== References ==
== External links ==
Don Schwarz. Peeking Inside the Box: Attribute-Oriented Programming with Java5
Sun JSR 175
Attributes and Reflection - sample chapter from Programming C# book
Modeling Turnpike Project
Fraclet Archived 2008-09-20 at the Wayback Machine: An annotation-based programming model for the Fractal component model
Attribute Enabled Software Development book
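The pattern described above, attaching metadata to program entities and reading it back via reflection, can be approximated in Python with decorators. This is only an illustrative sketch; the `attribute` decorator and `__attributes__` store are our inventions, not part of any standard attribute API:

```python
def attribute(**metadata):
    """Attach key/value metadata to a function, decorator-style."""
    def wrap(func):
        func.__attributes__ = metadata   # illustrative metadata store
        return func
    return wrap

@attribute(author="alice", deprecated=True)
def legacy_sum(a, b):
    return a + b

# "Reflection": tooling can inspect the metadata at run time,
# analogous to JSR-175 annotations or C# attributes.
print(legacy_sum.__attributes__["deprecated"])   # True
print(legacy_sum(2, 3))                          # 5
```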
https://en.wikipedia.org/wiki/Attribute-oriented_programming
Jon Louis Bentley (born February 20, 1953) is an American computer scientist who is known for his contributions to computer programming, algorithms and data structure research.
== Education ==
Bentley received a B.S. in mathematical sciences from Stanford University in 1974. At this time he developed his most cited work, the heuristic-based partitioning algorithm k-d tree, published in 1975. He received an M.S. and a PhD in 1976 from the University of North Carolina at Chapel Hill. While a student, he also held internships at the Xerox Palo Alto Research Center and Stanford Linear Accelerator Center.
== Career ==
After receiving his Ph.D., he taught programming and computer architecture for six years as a member of the faculty at Carnegie Mellon University as an assistant professor of computer science and mathematics. At CMU, his students included Brian Reid, John Ousterhout, Jeff Eppinger, Joshua Bloch, and James Gosling, and he was one of Charles Leiserson's advisors. He published Writing Efficient Programs in 1982. In 1982, Bentley moved to the Computer Science Research Center at Bell Laboratories, where he was Distinguished Member of the Technical Staff. In this period he developed various languages, continued his algorithm research and developed various software and products for communication systems. He co-authored an optimized Quicksort algorithm with Doug McIlroy. He left Bell Labs in 2001 and worked at Avaya Labs Research until 2013. In this period he developed enterprise communication systems. He found an optimal solution for the two-dimensional case of Klee's measure problem: given a set of n rectangles, find the area of their union. He and Thomas Ottmann invented the Bentley–Ottmann algorithm, an efficient algorithm for finding all intersecting pairs among a collection of line segments. He wrote the Programming Pearls column for the Communications of the ACM magazine, and later collected the articles into two books of the same name in 1986 and 1988.
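The two-dimensional case of Klee's measure problem mentioned above (given n rectangles, find the area of their union) can be illustrated with a short coordinate-compression sketch. This is far from Bentley's optimal sweep-line solution, and the function name is ours:

```python
def union_area(rects):
    """Area of the union of axis-aligned rectangles (x1, y1, x2, y2).

    Coordinate compression: partition the plane into a grid induced by
    every rectangle edge, then sum the cells covered by any rectangle.
    O(n^3)-ish, so a verification aid only, not Bentley's algorithm.
    """
    xs = sorted({x for r in rects for x in (r[0], r[2])})
    ys = sorted({y for r in rects for y in (r[1], r[3])})
    area = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx, cy = xs[i], ys[j]   # lower-left corner identifies the cell
            if any(r[0] <= cx < r[2] and r[1] <= cy < r[3] for r in rects):
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area

# Two 2x2 squares overlapping in a 1x1 region: 4 + 4 - 1 = 7.
print(union_area([(0, 0, 2, 2), (1, 1, 3, 3)]))   # 7
```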
Bentley received the Dr. Dobb's Excellence in Programming award in 2004.
== Personal life ==
He is a mountaineer who has climbed over one hundred 4,000-foot peaks in the northeastern United States.
== Bibliography ==
Programming Pearls, 1986. A second edition appeared in 2016, ISBN 0-201-65788-0.
More Programming Pearls: Confessions of a Coder, Prentice-Hall, 1988, ISBN 0-201-11889-0.
Writing Efficient Programs, Prentice-Hall, 1982, ISBN 0-13-970244-X.
Divide and Conquer Algorithms for Closest Point Problems in Multidimensional Space, Ph.D. thesis.
== References ==
https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
This is a list of television programs currently and formerly broadcast by the U.S. cable television channel Discovery Family. == Current programming == === Acquired programming === ==== Animated ==== ==== Preschool ==== === Programming from Cartoon Network/Kids' WB === ==== Cartoon Network Studios ==== ==== Warner Bros. Animation ==== ==== Hanna-Barbera Cartoons ==== ==== Preschool ==== === Programming from Discovery Kids (Latin America) === ==== Preschool ==== === Programming from other WBD networks === == Upcoming programming == === Programming from Discovery Kids (Latin America) === ==== Animated ==== == Former programming == This is a list of programs that have formerly aired on Discovery Kids (1996–present), Hub Network (2010–14), and Discovery Family (since 2014). An asterisk (*) indicates that the program had new episodes aired on Discovery Family. === Original programming === ==== Animated ==== ==== Live-action ==== ==== Preschool ==== === Acquired programming === ==== Animated ==== ==== Live-action ==== ==== Preschool ==== === Programming from Cartoon Network/Kids' WB === ==== Cartoon Network Studios ==== ==== Warner Bros. Animation ==== ==== Hanna-Barbera Cartoons ==== === Programming from other WBD networks === === Programming from PBS Kids === === Acquired programming from Cartoon Network/Kids' WB === ==== Animated ==== === Short-form programming === === Blocks === == Special programming == === Specials === === Films === == See also == List of programs broadcast by Cartoon Network List of programs broadcast by Cartoonito List of programs broadcast by Adult Swim List of programs broadcast by Boomerang List of programs broadcast by Toonami == Notes == == References ==
https://en.wikipedia.org/wiki/List_of_programs_broadcast_by_Discovery_Family
Relativistic programming (RP) is a style of concurrent programming where instead of trying to avoid conflicts between readers and writers (or writers and writers in some cases) the algorithm is designed to tolerate them and get a correct result regardless of the order of events. Also, relativistic programming algorithms are designed to work without the presence of a global order of events. That is, there may be some cases where one thread sees two events in a different order than another thread (hence the term relativistic, because in Einstein's theory of special relativity the order of events is not always the same to different viewers). This essentially implies working under causal consistency instead of a stronger model. Relativistic programming provides advantages in performance compared to other concurrency paradigms because it does not require one thread to wait for another nearly as often. Because of this, forms of it (Read-Copy-Update, for instance) are now used extensively in the Linux kernel (used over 18,000 times as of April 2021, having grown from nothing to 11.8% of all locking primitives in just under two decades).
== See also ==
Non-blocking algorithm
== References ==
== External links ==
Relativistic Programming at Portland State University
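A minimal sketch of the reader-tolerant style described above, loosely modeled on Read-Copy-Update. All names here are ours, and CPython's atomic name rebinding stands in for the pointer publication a real RCU implementation performs; readers never block, and each one sees either the old or the new snapshot, never a half-updated one:

```python
import threading

# Shared state is published as an immutable snapshot. The writer never
# mutates a snapshot in place; it builds a new one and swaps the
# reference, so readers need no locks and tolerate concurrent updates.
snapshot = (0, 0)   # invariant: second element == first squared

def writer():
    global snapshot
    for n in range(1, 10_000):
        snapshot = (n, n * n)   # atomic rebinding publishes the update

def reader(results):
    for _ in range(10_000):
        a, b = snapshot         # one dereference -> a consistent view
        results.append(b == a * a)

results = []
threads = [threading.Thread(target=writer)] + [
    threading.Thread(target=reader, args=(results,)) for _ in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(all(results))   # True: every read saw a self-consistent snapshot
```

Had the writer updated the two fields of a shared mutable object one at a time instead, a reader could observe them mid-update; copy-then-publish is what makes the reads conflict-tolerant.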
https://en.wikipedia.org/wiki/Relativistic_programming
ACCU, previously known as the Association of C and C++ Users, is a non-profit user group of people interested in software development, dedicated to raising the standard of computer programming. The ACCU publishes two journals and organizes an annual conference.
== History ==
ACCU was formed in 1987 by Martin Houston. The original name of the organisation was C Users' Group (UK) and this remained the formal name of the organisation until 2011, although it adopted the public name Association of C and C++ Users for the period 1993–2003, and adopted the shorter form ACCU from 2003 onward. As the formal name suggests, the organisation was originally created for people in the United Kingdom. However, the membership is worldwide, predominantly European and North American, but also with members from central and southern America, Australasia, Africa and Asia. Originally, the voluntary association was mainly for C programmers, but it has expanded over time to include all programming languages, especially C++, C#, Java, Perl and Python.
== Publications ==
The ACCU currently publishes two journals: C Vu is a members-only journal which acts as the association's newsletter and carries book reviews, articles on software development and a number of regular columns such as Student Code Critique and Professionalism in Programming. It was edited by Phil Stubbington from its first issue until 1991. Overload aims to carry more in-depth articles aimed at professional software developers. Topics range from programming and design through to process and management. Overload is available online to members and non-members free of charge. Other journals have been published by ACCU in the past. Accent was the newsletter of the Silicon Valley chapter and CAUGers was the newsletter of the Acorn special interest group. Overload was originally the journal of ACCU's C++ special interest group, but is no longer language-specific.
== Local groups == The Silicon Valley chapter organized local meetings in San Jose. Local groups were formed in London, Bristol & Bath, Oxford, Cambridge, North East England, Southern England, York and Zurich. == Conference == The ACCU is operated by a volunteer committee, elected at an Annual General Meeting during the annual conference each Spring which from 1997 to 2012 took place in Oxford, and for the first time in Bristol in 2013. It attracts speakers from the computing community including David Abrahams, Andrei Alexandrescu, Ross J. Anderson, James Coplien, Tom Gilb, Kevlin Henney, Andrew Koenig, Simon Peyton-Jones, Eric S. Raymond, Guido van Rossum, Greg Stein, Bjarne Stroustrup (the designer and original implementor of C++), Herb Sutter and Daveed Vandevoorde. The UK Python Conference, for the Python programming language, originally started out as a track at the ACCU conference. == Standardisation == ACCU supports the standardisation process for computer programming languages. ACCU provided financial sponsorship of meetings in the UK for both the International Organization for Standardization (ISO) C programming language working group and the ISO C++ working groups and helped finance travel to ECMA meetings in mainland Europe. == Mailing lists == The ACCU operates mailing lists, some of which are also open to non-members. These lists allow for general programming-orientated discussions, but also for mentored discussions. Mentored groups have included Effective C++, Python, software patterns, functional programming and XML. They are often based around study of a book. == References == == External links == ACCU Official Site The C Acorn User Group (with back issues of CAUGers) CUG ACCU Silicon Valley Chapter
https://en.wikipedia.org/wiki/ACCU_(organisation)
The Voyager program is an American scientific program that employs two interstellar probes, Voyager 1 and Voyager 2. They were launched in 1977 to take advantage of a favorable planetary alignment to explore the two gas giants Jupiter and Saturn and potentially also the ice giants, Uranus and Neptune—to fly near them while collecting data for transmission back to Earth. After Voyager 1 successfully completed its flyby of Saturn and its moon Titan, it was decided to send Voyager 2 on flybys of Uranus and Neptune. After the planetary flybys were complete, decisions were made to keep the probes in operation to explore interstellar space and the outer regions of the Solar System. On 25 August 2012, data from Voyager 1 indicated that it had entered interstellar space. On 5 November 2018, data from Voyager 2 indicated that it also had entered interstellar space. On 4 November 2019, scientists reported that on 5 November 2018, the Voyager 2 probe had officially reached the interstellar medium (ISM), a region of outer space beyond the influence of the solar wind, as did Voyager 1 in 2012. In August 2018, NASA confirmed, based on results by the New Horizons spacecraft, the existence of a "hydrogen wall" at the outer edges of the Solar System that was first detected in 1992 by the two Voyager spacecraft. As of 2024, the Voyagers are still in operation beyond the outer boundary of the heliosphere in interstellar space. Voyager 1 is moving with a velocity of 61,198 kilometers per hour (38,027 mph), or 17 km/s (10.5 miles/second), relative to the Sun, and is 24,475,900,000 kilometers (1.52086×1010 mi) from the Sun reaching a distance of 162 AU (24.2 billion km; 15.1 billion mi) from Earth as of May 25, 2024.
As of 2024, Voyager 2 is moving with a velocity of 55,347 kilometers per hour (34,391 mph), or 15 km/s, relative to the Sun, and is 20,439,100,000 kilometers (1.27003×1010 mi) from the Sun reaching a distance of 136.627 AU (20.4 billion km; 12.7 billion mi) from Earth as of May 25, 2024. The two Voyagers are the only human-made objects to date that have passed into interstellar space — a record they will hold until at least the 2040s — and Voyager 1 is the farthest human-made object from Earth. == History == === Mariner Jupiter-Saturn === Voyager did things no one predicted, found scenes no one expected, and promises to outlive its inventors. Like a great painting or an abiding institution, it has acquired an existence of its own, a destiny beyond the grasp of its handlers. The two Voyager space probes were originally conceived as part of the Planetary Grand Tour planned during the late 1960s and early 70s that aimed to explore Jupiter, Saturn, Saturn's moon Titan, Uranus, Neptune, and Pluto. The mission originated from the Grand Tour program, conceptualized by Gary Flandro, an aerospace engineer at the Jet Propulsion Laboratory, in 1964, which leveraged a rare planetary alignment occurring once every 175 years. This alignment allowed a craft to reach all outer planets using gravitational assists. The mission was to send several pairs of probes and gained momentum in 1966 when it was endorsed by NASA's Jet Propulsion Laboratory. However, in December 1971, the Grand Tour mission was canceled when funding was redirected to the Space Shuttle program. In 1972, a scaled-down (four planets, two identical spacecraft) mission was proposed, utilizing a spacecraft derived from the Mariner series, initially intended to be Mariner 11 and Mariner 12. The gravity-assist technique, successfully demonstrated by Mariner 10, would be used to achieve significant velocity changes by maneuvering through an intermediate planet's gravitational field to minimize time towards Saturn. 
The spacecraft were then moved into a separate program named Mariner Jupiter-Saturn (also Mariner Jupiter-Saturn-Uranus, MJS, or MJSU), part of the Mariner program, later renamed because it was thought that the design of the two space probes had progressed sufficiently beyond that of the Mariner family to merit a separate name.
=== Voyager probes ===
On March 4, 1977, NASA announced a competition to rename the mission, believing the existing name was not appropriate as the mission had differed significantly from previous Mariner missions. Voyager was chosen as the new name, referencing an earlier suggestion by William Pickering, who had proposed the name Navigator. Due to the name change occurring close to launch, the probes were still occasionally referred to as Mariner 11 and Mariner 12, or even Voyager 11 and Voyager 12. Two mission trajectories were established: JST, aimed at Jupiter and Saturn with an emphasis on a Titan flyby, and JSX, which served as a more flexible contingency plan. If JST succeeded, JSX could proceed with the Grand Tour, but in case of failure, JSX could be redirected for a separate Titan flyby, forfeiting the Grand Tour opportunity. The second probe, now Voyager 2, followed the JSX trajectory, granting it the option to continue on to Uranus and Neptune. Upon Voyager 1 completing its main objectives at Saturn, Voyager 2 received a mission extension, enabling it to proceed to Uranus and Neptune. This allowed Voyager 2 to diverge from the originally planned JST trajectory. The probes would be launched in August or September 1977, with their main objective being to compare the characteristics of Jupiter and Saturn, such as their atmospheres, magnetic fields, particle environments, ring systems, and moons. They would fly by planets and moons in either a JST or JSX trajectory.
After completing their flybys, the probes would communicate with Earth, relaying vital data using their magnetometers, spectrometers, and other instruments to detect interstellar, solar, and cosmic radiation. Their radioisotope thermoelectric generators (RTGs) would limit the maximum communication time with the probes to roughly a decade. Following their primary missions, the probes would continue to drift into interstellar space. Voyager 2 was the first to be launched. Its trajectory was designed to allow flybys of Jupiter, Saturn, Uranus, and Neptune. Voyager 1 was launched after Voyager 2, but along a shorter and faster trajectory that was designed to provide an optimal flyby of Saturn's moon Titan, which was known to be quite large and to possess a dense atmosphere. This encounter sent Voyager 1 out of the plane of the ecliptic, ending its planetary science mission. Had Voyager 1 been unable to perform the Titan flyby, the trajectory of Voyager 2 could have been altered to explore Titan, forgoing any visit to Uranus and Neptune. Voyager 1 was not launched on a trajectory that would have allowed it to continue to Uranus and Neptune, but could have continued from Saturn to Pluto without exploring Titan. During the 1990s, Voyager 1 overtook the slower deep-space probes Pioneer 10 and Pioneer 11 to become the most distant human-made object from Earth, a record that it will keep for the foreseeable future. The New Horizons probe, which had a higher launch velocity than Voyager 1, is travelling more slowly due to the extra speed Voyager 1 gained from its flybys of Jupiter and Saturn. Voyager 1 and Pioneer 10 are the most widely separated human-made objects anywhere since they are travelling in roughly opposite directions from the Solar System. 
In December 2004, Voyager 1 crossed the termination shock, where the solar wind is slowed to subsonic speed, and entered the heliosheath, where the solar wind is compressed and made turbulent due to interactions with the interstellar medium. On 10 December 2007, Voyager 2 also reached the termination shock, about 1.6 billion kilometres (1 billion miles) closer to the Sun than where Voyager 1 first crossed it, indicating that the Solar System is asymmetrical. In 2010, Voyager 1 reported that the outward velocity of the solar wind had dropped to zero, and scientists predicted it was nearing interstellar space. In 2011, data from the Voyagers determined that the heliosheath is not smooth, but filled with giant magnetic bubbles, theorized to form when the magnetic field of the Sun becomes warped at the edge of the Solar System. In June 2012, scientists at NASA reported that Voyager 1 was very close to entering interstellar space, indicated by a sharp rise in high-energy particles from outside the Solar System. In September 2013, NASA announced that Voyager 1 had crossed the heliopause on 25 August 2012, making it the first spacecraft to enter interstellar space. In December 2018, NASA announced that Voyager 2 had crossed the heliopause on 5 November 2018, making it the second spacecraft to enter interstellar space. As of 2017, Voyager 1 and Voyager 2 continue to monitor conditions in the outer expanses of the Solar System. The Voyager spacecraft are expected to be able to operate science instruments through 2020, when limited power will require instruments to be deactivated one by one. Sometime around 2025, there will no longer be sufficient power to operate any science instruments. In July 2019, a revised power management plan was implemented to better manage the two probes' dwindling power supply.
== Spacecraft design ==
The Voyager spacecraft each weighed 815 kilograms (1,797 pounds) at launch, but after fuel usage are now about 733 kilograms (1,616 pounds).
Of this weight, each spacecraft carries 105 kilograms (231 pounds) of scientific instruments. The identical Voyager spacecraft use three-axis-stabilized guidance systems that use gyroscopic and accelerometer inputs to their attitude control computers to point their high-gain antennas towards the Earth and their scientific instruments towards their targets, sometimes with the help of a movable instrument platform for the smaller instruments and the electronic photography system. The diagram shows the high-gain antenna (HGA) with a 3.7 m (12 ft) diameter dish attached to the hollow decagonal electronics container. There is also a spherical tank that contains the hydrazine monopropellant fuel. The Voyager Golden Record is attached to one of the bus sides. The angled square panel to the right is the optical calibration target and excess heat radiator. The three radioisotope thermoelectric generators (RTGs) are mounted end-to-end on the lower boom. The scan platform comprises: the Infrared Interferometer Spectrometer (IRIS) (largest camera at top right); the Ultraviolet Spectrometer (UVS) just above the IRIS; the two Imaging Science Subsystem (ISS) vidicon cameras to the left of the UVS; and the Photopolarimeter System (PPS) under the ISS. Only five investigation teams are still supported, though data is collected for two additional instruments. The Flight Data Subsystem (FDS) and a single eight-track digital tape recorder (DTR) provide the data handling functions. The FDS configures each instrument and controls instrument operations. It also collects engineering and science data and formats the data for transmission. The DTR is used to record high-rate Plasma Wave Subsystem (PWS) data, which is played back every six months. The Imaging Science Subsystem made up of a wide-angle and a narrow-angle camera is a modified version of the slow scan vidicon camera designs that were used in the earlier Mariner flights. 
The Imaging Science Subsystem consists of two television-type cameras, each with eight filters in a commandable filter wheel mounted in front of the vidicons. One has a low resolution 200 mm (7.9 in) focal length wide-angle lens with an aperture of f/3 (the wide-angle camera), while the other uses a higher resolution 1,500 mm (59 in) narrow-angle f/8.5 lens (the narrow-angle camera). Three spacecraft were built: Voyager 1 (VGR 77-1), Voyager 2 (VGR 77-3), and a test spare model (VGR 77-2).
=== Scientific instruments ===
=== Computers and data processing ===
There are three different computer types on the Voyager spacecraft, two of each kind, sometimes used for redundancy. They are proprietary, custom-built computers built from CMOS and TTL medium-scale integrated circuits and discrete components, mostly from the 7400 series of Texas Instruments. The total number of words among the six computers is about 32K. Voyager 1 and Voyager 2 have identical computer systems. The Computer Command System (CCS), the central controller of the spacecraft, has two 18-bit word, interrupt-type processors with 4096 words each of non-volatile plated-wire memory. During most of the Voyager mission the two CCS computers on each spacecraft were used non-redundantly to increase the command and processing capability of the spacecraft. The CCS is nearly identical to the system flown on the Viking spacecraft. The Flight Data System (FDS) is two 16-bit word machines with modular memories and 8198 words each. The Attitude and Articulation Control System (AACS) is two 18-bit word machines with 4096 words each. Unlike the other on-board instruments, the operation of the cameras for visible light is not autonomous, but rather it is controlled by an imaging parameter table contained in one of the on-board digital computers, the Flight Data Subsystem (FDS). More recent space probes, since about 1990, usually have completely autonomous cameras.
The computer command subsystem (CCS) controls the cameras. The CCS contains fixed computer programs such as command decoding, fault detection, and correction routines, antenna-pointing routines, and spacecraft sequencing routines. This computer is an improved version of the one that was used in the Viking orbiter. The hardware in both custom-built CCS subsystems in the Voyagers is identical. There is only a minor software modification for one of them that has a scientific subsystem that the other lacks. According to the Guinness Book of Records, the CCS holds the record for the "longest period of continual operation for a computer". It has been running continuously since 20 August 1977. The Attitude and Articulation Control Subsystem (AACS) controls the spacecraft orientation (its attitude). It keeps the high-gain antenna pointing towards the Earth, controls attitude changes, and points the scan platform. The custom-built AACS systems on both craft are identical. It has been erroneously reported on the Internet that the Voyager space probes were controlled by a version of the RCA 1802 (RCA CDP1802 "COSMAC" microprocessor), but such claims are not supported by the primary design documents. The CDP1802 microprocessor was used later in the Galileo space probe, which was designed and built years later. The digital control electronics of the Voyagers were not based on a microprocessor integrated-circuit chip.
=== Communications ===
The uplink communications are executed via S-band microwave communications. The downlink communications are carried out by an X-band microwave transmitter on board the spacecraft, with an S-band transmitter as a back-up. All long-range communications to and from the two Voyagers have been carried out using their 3.7-meter (12 ft) high-gain antennas.
The high-gain antenna has a beamwidth of 0.5° for X-band, and 2.3° for S-band. (The low-gain antenna has a 7 dB gain and 60° beamwidth.) Because of the inverse-square law in radio communications, the digital data rates used in the downlinks from the Voyagers have been continually decreasing the farther that they get from the Earth. For example, the data rate used from Jupiter was about 115,000 bits per second. That was halved at the distance of Saturn, and it has gone down continually since then. Some measures were taken on the ground along the way to reduce the effects of the inverse-square law. Between 1982 and 1985, the diameters of the three main parabolic dish antennas of the Deep Space Network were increased from 64 to 70 m (210 to 230 ft), dramatically increasing their areas for gathering weak microwave signals. Whilst the craft were between Saturn and Uranus, the onboard software was upgraded to do a degree of image compression and to use a more efficient Reed-Solomon error-correcting encoding. Then between 1986 and 1989, new techniques were brought into play to combine the signals from multiple antennas on the ground into one, more powerful signal, in a kind of antenna array. This was done at Goldstone, California, Canberra (Australia), and Madrid (Spain) using the additional dish antennas available there. Also, in Australia, the Parkes Radio Telescope was brought into the array in time for the fly-by of Neptune in 1989. In the United States, the Very Large Array in New Mexico was brought into temporary use along with the antennas of the Deep Space Network at Goldstone. Using this new technology of antenna arrays helped to compensate for the immense radio distance from Neptune to the Earth. === Power === Electrical power is supplied by three MHW-RTG radioisotope thermoelectric generators (RTGs).
They are powered by plutonium-238 (distinct from the Pu-239 isotope used in nuclear weapons) and provided approximately 470 W at 30 volts DC when the spacecraft were launched. Plutonium-238 decays with a half-life of 87.74 years, so RTGs using Pu-238 lose 1 − 0.5^(1/87.74) ≈ 0.79% of their power output per year. In 2011, 34 years after launch, the thermal power generated by such an RTG would be reduced to 0.5^(34/87.74) ≈ 76% of its initial power. The RTG thermocouples, which convert thermal power into electricity, also degrade over time, reducing available electric power below this calculated level. By 7 October 2011 the power generated by Voyager 1 and Voyager 2 had dropped to 267.9 W and 269.2 W respectively, about 57% of the power at launch. The level of power output was better than pre-launch predictions based on a conservative thermocouple degradation model. As the electrical power decreases, spacecraft loads must be turned off, eliminating some capabilities. There may be insufficient power for communications by 2032. == Voyager Interstellar Mission == The Voyager primary mission was completed in 1989, with the close flyby of Neptune by Voyager 2. The Voyager Interstellar Mission (VIM) is a mission extension, which began when the two spacecraft had already been in flight for over 12 years. The Heliophysics Division of the NASA Science Mission Directorate conducted a Heliophysics Senior Review in 2008. The panel found that the VIM "is a mission that is absolutely imperative to continue" and that VIM "funding near the optimal level and increased DSN (Deep Space Network) support is warranted." The main objective of the VIM was to extend the exploration of the Solar System beyond the outer planets to the heliopause (the farthest extent at which the Sun's radiation predominates over interstellar winds) and if possible even beyond. Voyager 1 crossed the heliopause boundary in 2012, followed by Voyager 2 in 2018.
Passing through the heliopause boundary has allowed both spacecraft to make measurements of the interstellar fields, particles and waves unaffected by the solar wind. Two significant findings so far have been the discovery of a region of magnetic bubbles and no indication of an expected shift in the solar magnetic field. The entire Voyager 2 scan platform, including all of the platform instruments, was switched off in 1998. All platform instruments on Voyager 1, except for the ultraviolet spectrometer (UVS), have also been switched off. The Voyager 1 scan platform was scheduled to go off-line in late 2000 but has been left on to investigate UV emission from the upwind direction. UVS data are still captured but scans are no longer possible. Gyro operations ended in 2016 for Voyager 2 and in 2017 for Voyager 1. Gyro operations are used to rotate the probe 360 degrees six times per year to measure the magnetic field of the spacecraft, which is then subtracted from the magnetometer science data. On 14 November 2023, Voyager 1 stopped sending all telemetry and data, though the signal was still present. After months of experiments, made considerably more difficult by the 45-hour round-trip time, the cause was traced to a bad memory chip. New software was written to avoid the bad memory block, and engineering data resumed on 20 April 2024. Science data from two instruments resumed in May 2024, and full recovery (of all science instruments that were still powered up) was achieved in June 2024. For more details of this intricate operation, see Voyager 1. The two spacecraft continue to operate, with some loss in subsystem redundancy, but retain the capability to return scientific data from a full complement of Voyager Interstellar Mission (VIM) science instruments. Both spacecraft also have adequate electrical power and attitude control propellant to continue operating and collecting science data through at least 2026.
Though additional science instruments may need to be turned off, the spacecraft are expected to be able to communicate until 2036, in the absence of additional failures. === Mission details === By the start of VIM, Voyager 1 was at a distance of 40 AU from the Earth, while Voyager 2 was at 31 AU. VIM is in three phases: termination shock, heliosheath exploration, and interstellar exploration phase. The spacecraft began VIM in an environment controlled by the Sun's magnetic field, with the plasma particles being dominated by those contained in the expanding supersonic solar wind. This is the characteristic environment of the termination shock phase. At some distance from the Sun, the supersonic solar wind will be held back from further expansion by the interstellar wind. The first feature encountered by a spacecraft as a result of this interaction – between interstellar wind and solar wind – was the termination shock, where the solar wind slows to subsonic speed, and large changes in plasma flow direction and magnetic field orientation occur. Voyager 1 completed the phase of termination shock in December 2004 at a distance of 94 AU, while Voyager 2 completed it in August 2007 at a distance of 84 AU. After entering into the heliosheath, the spacecraft were in an area that is dominated by the Sun's magnetic field and solar wind particles. After passing through the heliosheath, the two Voyagers began the phase of interstellar exploration. The outer boundary of the heliosheath is called the heliopause. This is the region where the Sun's influence begins to decrease and interstellar space can be detected. Voyager 1 is escaping the Solar System at the speed of 3.6 AU per year 35° north of the ecliptic in the general direction of the solar apex in Hercules, while Voyager 2's speed is about 3.3 AU per year, heading 48° south of the ecliptic. The Voyager spacecraft will eventually go on to the stars. 
In about 40,000 years, Voyager 1 will be within 1.6 light years (ly) of AC+79 3888, also known as Gliese 445, which is approaching the Sun. In 40,000 years Voyager 2 will be within 1.7 ly of Ross 248 (another star which is approaching the Sun), and in 296,000 years it will pass within 4.6 ly of Sirius, which is the brightest star in the night sky. The spacecraft are not expected to collide with a star for 1 sextillion (10^20) years. In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System, as detected by the Voyager space probes. According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose". == Voyager Golden Record == Both spacecraft carry a 12-inch (30 cm) golden phonograph record that contains pictures and sounds of Earth, symbolic directions on the cover for playing the record, and data detailing the location of Earth. The record is intended as a combination time capsule and an interstellar message to any civilization, alien or far-future human, that may recover either of the Voyagers. The contents of this record were selected by a committee that included Timothy Ferris and was chaired by Carl Sagan. == Pale Blue Dot == Pale Blue Dot is a photograph of Earth taken on February 14, 1990, by the Voyager 1 space probe from a distance of approximately 6 billion kilometers (3.7 billion miles, 40.5 AU), as part of that day's Family Portrait series of images of the Solar System. The Voyager program's discoveries during the primary phase of its mission, including new close-up color photos of the major planets, were regularly documented by print and electronic media outlets. Among the best-known of these is an image of the Earth as a Pale Blue Dot, taken in 1990 by Voyager 1, and popularized by Carl Sagan: Consider again that dot. That's here. That's home.
That's us....The Earth is a very small stage in a vast cosmic arena.... To my mind, there is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly and compassionately with one another and to preserve and cherish that pale blue dot, the only home we've ever known. == See also == == References == == Further reading == Swift, David W. (1997). Voyager Tales. Reston, Va: American Institute of Aeronautics and Astronautics. ISBN 978-1-56347-252-7. Gallentine, Jay (2009). Ambassadors from Earth: Pioneering Explorations with Unmanned Spacecraft. Lincoln: U of Nebraska Press. ISBN 978-0-8032-2220-5. Pyne, Stephen J. (2010). Voyager: Exploration, Space, and the Third Great Age of Discovery. Penguin Books. ISBN 978-0-14-311959-3. Bell, Jim (2015). The Interstellar Age: Inside the Forty-Year Voyager Mission. Penguin Publishing Group. ISBN 978-0-698-18615-6. == External links == NASA sites NASA Voyager website Voyager Mission status (updated in real time) Voyager Spacecraft Lifetime NASA Facts – Voyager Mission to the Outer Planets Voyager 1 and 2 atlas of six Saturnian satellites, 1984 JPL Voyager Telecom Manual NASA instrument information pages: "Voyager instrument overview". Archived from the original on 21 July 2011. "CRS – COSMIC RAY SUBSYSTEM". Archived from the original on 3 August 2014. Retrieved 11 November 2017. "ISS NA – IMAGING SCIENCE SUBSYSTEM – NARROW ANGLE". NASA. Retrieved 2 April 2023. "ISS WA – IMAGING SCIENCE SUBSYSTEM – WIDE ANGLE". Archived from the original on 18 July 2009. Retrieved 29 October 2009. "IRIS – INFRARED INTERFEROMETER SPECTROMETER AND RADIOMETER". Archived from the original on 18 July 2009. Retrieved 29 October 2009. "LECP – LOW ENERGY CHARGED PARTICLE". Archived from the original on 18 July 2009. Retrieved 29 October 2009. "MAG – TRIAXIAL FLUXGATE MAGNETOMETER". Archived from the original on 18 July 2009. 
Retrieved 29 October 2009. "PLS – PLASMA SCIENCE EXPERIMENT". Archived from the original on 18 July 2009. Retrieved 29 October 2009. "PPS – PHOTOPOLARIMETER SUBSYSTEM". Archived from the original on 25 August 2009. Retrieved 29 October 2009. "PRA – PLANETARY RADIO ASTRONOMY RECEIVER". Archived from the original on 18 July 2009. Retrieved 29 October 2009. "PWS – PLASMA WAVE RECEIVER". Archived from the original on 18 July 2009. Retrieved 29 October 2009. "RSS – RADIO SCIENCE SUBSYSTEM". Archived from the original on 3 August 2014. Retrieved 11 November 2017. "UVS – ULTRAVIOLET SPECTROMETER". Archived from the original on 3 August 2014. Retrieved 11 November 2017. Non-NASA sites Spacecraft Escaping the Solar System – current positions and diagrams NPR: Science Friday 8/24/07 Interviews for 30th anniversary of Voyager spacecraft Illustrated technical paper by RL Heacock, the project engineer Gray, Meghan. "Voyager and Interstellar Space". Deep Space Videos. Brady Haran. PBS featured documentary The Farthest-Voyager in Space Voyager image album by Kevin M. Gill
https://en.wikipedia.org/wiki/Voyager_program
In optimization theory, semi-infinite programming (SIP) is an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In the former case the constraints are typically parameterized. == Mathematical formulation of the problem == The problem can be stated simply as:

min_{x ∈ X} f(x)
subject to: g(x, y) ≤ 0, ∀ y ∈ Y

where f : R^n → R, g : R^n × R^m → R, X ⊆ R^n, and Y ⊆ R^m. SIP can be seen as a special case of bilevel programs in which the lower-level variables do not participate in the objective function. == Methods for solving the problem == In the meantime, see external links below for a complete tutorial. == Examples == In the meantime, see external links below for a complete tutorial. == See also == Optimization Generalized semi-infinite programming (GSIP) == References == == External links == Description of semi-infinite programming from INFORMS (Institute for Operations Research and Management Science).
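As a concrete illustration of the formulation above (an example supplied here, not taken from the article), consider minimizing a single scalar variable under a constraint that must hold for every value of a continuous parameter:

```latex
\min_{x \in \mathbb{R}} \; x
\qquad \text{subject to} \qquad
g(x, y) = \sin(y) - x \le 0 \quad \forall\, y \in [0, 2\pi].
```

There is one variable but uncountably many constraints, one per y. Feasibility requires x ≥ max_{y ∈ [0, 2π]} sin(y) = 1, so the optimum is x* = 1; the infinitely many constraints collapse to a single binding constraint at the maximizing parameter value, which is the structure SIP methods exploit.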
https://en.wikipedia.org/wiki/Semi-infinite_programming
A low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture, memory or underlying physical hardware; commands or functions in the language are structurally similar to a processor's instructions. These languages provide the programmer with full control over program memory and the underlying machine code instructions. Because of the low level of abstraction (hence the term "low-level") between the language and machine language, low-level languages are sometimes described as being "close to the hardware". Programs written in low-level languages tend to be relatively non-portable, due to being optimized for a certain type of system architecture. Depending on the language, low-level code is converted to machine code either directly or with minimal translation: machine code needs no conversion at all, while second-generation (assembly) languages require only an assembler. A program written in a low-level language can be made to run very quickly, with a small memory footprint. Such programs may be architecture dependent or operating system dependent, due to using low-level APIs. == Machine code == Machine code is the form in which code that can be directly executed is stored on a computer. It consists of machine language instructions, stored in memory, that perform operations such as moving values in and out of memory locations, arithmetic and Boolean logic, and testing values and, based on the test, either executing the next instruction in memory or executing an instruction at another location. Machine code is usually stored in memory as binary data. Programmers almost never write programs directly in machine code; instead, they write code in assembly language or higher-level programming languages. Although few programs are written in machine languages, programmers often become adept at reading it through working with core dumps or debugging from the front panel.
Example of a function in hexadecimal representation of x86-64 machine code to calculate the nth Fibonacci number, with each line corresponding to one instruction:

89 f8
85 ff
74 26
83 ff 02
76 1c
89 f9
ba 01 00 00 00
be 01 00 00 00
8d 04 16
83 f9 02
74 0d
89 d6
ff c9
89 c2
eb f0
b8 01 00 00 00
c3

== Assembly language == Second-generation languages provide one abstraction level on top of the machine code. In the early days of coding on computers like TX-0 and PDP-1, the first thing MIT hackers did was to write assemblers. Assembly language has little semantics or formal specification, being only a mapping of human-readable symbols, including symbolic addresses, to opcodes, addresses, numeric constants, strings and so on. Typically, one machine instruction is represented as one line of assembly code, with the operation denoted by a mnemonic. Assemblers produce object files that can link with other object files or be loaded on their own. Most assemblers provide macros to generate common sequences of instructions. Example: The same Fibonacci number calculator as above, but in x86-64 assembly language using Intel syntax. In this code example, the registers of the x86-64 processor are named and manipulated directly. The function loads its 64-bit argument from rdi in accordance with the System V application binary interface for x86-64 and performs its calculation by manipulating values in the rax, rcx, rsi, and rdi registers until it has finished and returns. Note that in this assembly language, there is no concept of returning a value. The result having been stored in the rax register, again in accordance with the System V application binary interface, the ret instruction simply removes the top 64-bit element on the stack and causes the next instruction to be fetched from that location (that instruction is usually the instruction immediately after the one that called this function), with the result of the function being stored in rax.
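The assembly listing this passage describes is not reproduced in this text. A sketch consistent with the prose description (argument in rdi, iteration using rax, rcx, rsi, and rdi, result left in rax) is given below; the labels are hypothetical and this is an illustrative reconstruction, not the article's exact original listing:

```
fib:
        mov rax, rdi          ; return n itself for n = 0 or 1
        cmp rdi, 2
        jb .done              ; fib(0) = 0, fib(1) = 1
        mov rcx, rdi          ; rcx = loop counter, counts down to 1
        mov rsi, 0            ; rsi = fib(i-2)
        mov rdi, 1            ; rdi = fib(i-1); the argument is no longer needed
.loop:
        lea rax, [rsi + rdi]  ; rax = fib(i) = fib(i-2) + fib(i-1)
        mov rsi, rdi          ; shift the pair forward
        mov rdi, rax
        dec rcx
        cmp rcx, 1
        jne .loop             ; loop until fib(n) is in rax
.done:
        ret                   ; result is returned in rax, per the System V ABI
```

Note that nothing in the assembly marks rax as "the return value"; that meaning exists only by convention of the calling ABI, which is exactly the point the surrounding text makes.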
x86-64 assembly language imposes no standard for passing values to a function or returning values from a function (and in fact, has no concept of a function); those are defined by an application binary interface (ABI), such as the System V ABI for a particular instruction set. Compare this with the same function in C. The C version is similar in structure to the assembly language example, but there are significant differences in terms of abstraction: The input (parameter n) is an abstraction that does not specify any storage location on the hardware. In practice, the C compiler follows one of many possible calling conventions to determine a storage location for the input. The local variables f_nminus2, f_nminus1, and f_n are abstractions that do not specify any specific storage location on the hardware. The C compiler decides how to actually store them for the target architecture. The return statement specifies the value to return, but does not dictate how it is returned. The C compiler for any specific architecture implements a standard mechanism for returning the value. Compilers for the x86-64 architecture typically (but not always) use the rax register to return a value, as in the assembly language example (the assembly language example above follows the System V application binary interface for x86-64, but assembly language does not require this). These abstractions make the C code compilable without modification on any architecture for which a C compiler has been written, whereas the assembly language code above will only run on processors using the x86-64 architecture. == C programming language == C has variously been described as low-level and high-level. Although traditionally considered high-level, C’s level of abstraction from the hardware is far lower than that of many subsequently developed languages, particularly interpreted languages.
The direct interface C provides between the programmer and hardware memory allocation and management makes C the lowest-level language of the 10 most popular languages currently in use. C is architecture independent — the same C code may, in most cases, be compiled (by different machine-specific compilers) for use on a wide breadth of machine platforms. In many respects (including directory operations and memory allocation), C provides “an interface to system-dependent objects that is itself relatively system independent”. This feature is considered “high-level” in comparison with platform-specific assembly languages. == Low-level programming in high-level languages == During the late 1960s and 1970s, high-level languages that included some degree of access to low-level programming functions, such as PL/S, BLISS, BCPL, extended ALGOL and NEWP (for Burroughs large systems/Unisys Clearpath MCP systems), and C, were introduced. One method for this is inline assembly, in which assembly code is embedded in a high-level language that supports this feature. Some of these languages also allow architecture-dependent compiler optimization directives to adjust the way a compiler uses the target processor architecture. Furthermore, as referenced above, the GCC documentation illustrates the inline assembly ability of C with a simple copy-and-addition example. Such code displays the interaction between a generally high-level language like C and its middle/low-level counterpart, assembly. Although this may not make C a natively low-level language, these facilities express the interactions in a more direct way. == References == == Bibliography == Zhirkov, Igor (2017). Low-level programming: C, assembly, and program execution on Intel 64 architecture. California: Apress. ISBN 978-1-4842-2402-1.
https://en.wikipedia.org/wiki/Low-level_programming_language
In computer science, programming by example (PbE), also termed programming by demonstration or more generally as demonstrational programming, is an end-user development technique for teaching a computer new behavior by demonstrating actions on concrete examples. The system records user actions and infers a generalized program that can be used on new examples. PbE is intended to be easier to do than traditional computer programming, which generally requires learning and using a programming language. Many PbE systems have been developed as research prototypes, but few have found widespread real-world application. More recently, PbE has proved to be a useful paradigm for creating scientific workflows. PbE is used in two independent clients for the BioMOBY protocol: Seahawk and Gbrowse moby. The term programming by demonstration (PbD), meanwhile, has mostly been adopted by robotics researchers for teaching new behaviors to the robot through a physical demonstration of the task. The usual distinction in the literature between these terms is that in PbE the user gives a prototypical product of the computer execution, such as a row in the desired results of a query, while in PbD the user performs a sequence of actions that the computer must repeat, generalizing it to be used in different data sets. For end users wanting to automate a workflow in a complex tool (e.g., Photoshop), the simplest case of PbD is the macro recorder.
== See also == Query by Example Automated machine learning Example-based machine translation Inductive programming Lapis (text editor), which allows simultaneous editing of similar items in multiple selections created by example Programming by demonstration Test-driven development == References == == External links == Henry Lieberman's page on Programming by Example Online copy of Watch What I Do, Allen Cypher's book on Programming by Demonstration Online copy of Your Wish is My Command, Henry Lieberman's sequel to Watch What I Do A Visual Language for Data Mapping, John Carlson's description of an Integrated Development Environment (IDE) that used Programming by Example (desktop objects) for data mapping, and an iconic language for recording operations
https://en.wikipedia.org/wiki/Programming_by_example
Multi-stage programming (MSP) is a variety of metaprogramming in which compilation is divided into a series of intermediate phases, allowing typesafe run-time code generation. Statically defined types are used to verify that dynamically constructed types are valid and do not violate the type system. In MSP languages, expressions are qualified by notation that specifies the phase at which they are to be evaluated. By allowing the specialization of a program at run-time, MSP can optimize the performance of programs: it can be considered as a form of partial evaluation that performs computations at compile-time as a trade-off to increase the speed of run-time processing. Multi-stage programming languages support constructs similar to the Lisp construct of quotation and eval, except that scoping rules are taken into account. == References == == External links == MetaOCaml
https://en.wikipedia.org/wiki/Multi-stage_programming
Functional reactive programming (FRP) is a programming paradigm for reactive programming (asynchronous dataflow programming) using the building blocks of functional programming (e.g., map, reduce, filter). FRP has been used for programming graphical user interfaces (GUIs), robotics, games, and music, aiming to simplify these problems by explicitly modeling time. == Formulations of FRP == The original formulation of functional reactive programming can be found in the ICFP 97 paper Functional Reactive Animation by Conal Elliott and Paul Hudak. FRP has taken many forms since its introduction in 1997. One axis of diversity is discrete vs. continuous semantics. Another axis is how FRP systems can be changed dynamically. === Continuous === The earliest formulation of FRP used continuous semantics, aiming to abstract over many operational details that are not important to the meaning of a program. The key properties of this formulation are: Modeling values that vary over continuous time, called "behaviors" and later "signals". Modeling "events" which have occurrences at discrete points in time. The system can be changed in response to events, generally termed "switching." The separation of evaluation details such as sampling rate from the reactive model. This semantic model of FRP in side-effect free languages is typically in terms of continuous functions, and typically over time. This formulation is also referred to as denotative continuous time programming (DCTP). === Discrete === Formulations such as Event-Driven FRP and versions of Elm prior to 0.17 require that updates are discrete and event-driven. These formulations have pushed for practical FRP, focusing on semantics that have a simple API that can be implemented efficiently in a setting such as robotics or in a web-browser. In these formulations, it is common that the ideas of behaviors and events are combined into signals that always have a current value, but change discretely. 
== Interactive FRP == It has been pointed out that the ordinary FRP model, from inputs to outputs, is poorly suited to interactive programs. Lacking the ability to "run" programs within a mapping from inputs to outputs may mean one of the following solutions must be used: Create a data structure of actions which appear as the outputs. The actions must be run by an external interpreter or environment. This inherits all of the difficulties of the original stream input/output (I/O) system of Haskell. Use Arrowized FRP and embed arrows which are capable of performing actions. The actions may also have identities, which allows them to maintain separate mutable stores, for example. This is the approach taken by the Fudgets library and, more generally, Monadic Stream Functions. The novel approach is to allow actions to be run now (in the IO monad) but defer the receipt of their results until later. This makes use of an interaction between the Event and IO monads, and is compatible with a more expression-oriented FRP. == Implementation issues == There are two types of FRP systems, push-based and pull-based. Push-based systems take events and push them through a signal network to achieve a result. Pull-based systems wait until the result is demanded, and work backwards through the network to retrieve the value demanded. Some FRP systems such as Yampa use sampling, where samples are pulled by the signal network. This approach has a drawback: the network must wait up to the duration of one computation step to learn of changes to the input. Sampling is an example of pull-based FRP. The Reactive and Etage libraries on Hackage introduced an approach called push-pull FRP. In it, an event on a purely defined stream (such as a list of fixed events with times) is constructed only when it is demanded. These purely defined streams act like lazy lists in Haskell. That is the pull-based half. The push-based half is used when events external to the system are brought in.
The external events are pushed to consumers, so that they can find out about an event the instant it is issued. == Implementations == Implementations exist for many programming languages, including: Yampa is an arrowized, efficient, pure Haskell implementation with SDL, SDL2, OpenGL and HTML DOM support. The language Elm used to support FRP but has since replaced it with a different pattern. reflex is an efficient push–pull FRP implementation in Haskell with hosts for web browser – Document Object Model (DOM), Simple DirectMedia Layer (SDL), and Gloss. reactive-banana is a target-agnostic push FRP implementation in Haskell. netwire and varying are arrowized, pull FRP implementations in Haskell. Flapjax is a behavior–event FRP implementation in JavaScript. React is an OCaml module for functional reactive programming. Sodium is a push FRP implementation independent of a specific user interface (UI) framework for several languages, such as Java, TypeScript, and C#. Dunai is a fast implementation in Haskell using Monadic Stream Functions that supports Classic and Arrowized FRP. ObservableComputations, a cross-platform .NET implementation. Stella is an actor model-based reactive language that demonstrates a model of "actors" and "reactors" which aims to avoid the issues of combining imperative code with reactive code (by separating them in actors and reactors). Actors are suitable for use in distributed reactive systems. TidalCycles is a pure FRP domain specific language for musical pattern, embedded in the Haskell language. ReactiveX, popularized by its JavaScript implementation rxjs, is functional and reactive but differs from functional reactive programming. == See also == Incremental computing Stream processing == References ==
https://en.wikipedia.org/wiki/Functional_reactive_programming
Haskell () is a general-purpose, statically typed, purely functional programming language with type inference and lazy evaluation. Designed for teaching, research, and industrial applications, Haskell pioneered several programming language features such as type classes, which enable type-safe operator overloading, and monadic input/output (IO). It is named after logician Haskell Curry. Haskell's main implementation is the Glasgow Haskell Compiler (GHC). Haskell's semantics are historically based on those of the Miranda programming language, which served to focus the efforts of the initial Haskell working group. The last formal specification of the language was made in July 2010, while the development of GHC continues to expand Haskell via language extensions. Haskell is used in academia and industry. As of May 2021, Haskell was the 28th most popular programming language by Google searches for tutorials, and made up less than 1% of active users on the GitHub source code repository. == History == After the release of Miranda by Research Software Ltd. in 1985, interest in lazy functional languages grew. By 1987, more than a dozen non-strict, purely functional programming languages existed. Miranda was the most widely used, but it was proprietary software. At the conference on Functional Programming Languages and Computer Architecture (FPCA '87) in Portland, Oregon, there was a strong consensus that a committee be formed to define an open standard for such languages. The committee's purpose was to consolidate existing functional languages into a common one to serve as a basis for future research in functional-language design. === Haskell 1.0 to 1.4 === Haskell was developed by a committee, attempting to bring together off the shelf solutions where possible. Type classes, which enable type-safe operator overloading, were first proposed by Philip Wadler and Stephen Blott to address the ad-hoc handling of equality types and arithmetic overloading in languages at the time. 
In early versions of Haskell up until and including version 1.2, user interaction and input/output (IO) were handled by both stream-based and continuation-based mechanisms, which were widely considered unsatisfactory. In version 1.3, monadic IO was introduced, along with the generalisation of type classes to higher kinds (type constructors). Along with "do notation", which provides syntactic sugar for the Monad type class, this gave Haskell an effect system that maintained referential transparency and was convenient. Another notable change in early versions was the approach to the 'seq' function, which creates a data dependency between values and is used in lazy languages to avoid excessive memory consumption; it moved from a type class to a standard function to make refactoring more practical. The first version of Haskell ("Haskell 1.0") was defined in 1990. The committee's efforts resulted in a series of language definitions (1.0, 1.1, 1.2, 1.3, 1.4). === Haskell 98 === In late 1997, the series culminated in Haskell 98, intended to specify a stable, minimal, portable version of the language and an accompanying standard library for teaching, and as a base for future extensions. The committee expressly welcomed creating extensions and variants of Haskell 98 via adding and incorporating experimental features. In February 1999, the Haskell 98 language standard was originally published as The Haskell 98 Report. In January 2003, a revised version was published as Haskell 98 Language and Libraries: The Revised Report. The language continues to evolve rapidly, with the Glasgow Haskell Compiler (GHC) implementation representing the current de facto standard. === Haskell 2010 === In early 2006, the process of defining a successor to the Haskell 98 standard, informally named Haskell Prime, began. This was intended to be an ongoing incremental process to revise the language definition, producing a new revision up to once per year.
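The role of seq mentioned above can be sketched briefly. The function name sumStrict below is illustrative, not from the standard library; the point is that forcing each intermediate accumulator prevents a long chain of unevaluated thunks from accumulating.

```haskell
-- A sketch of how 'seq' avoids excessive memory use in a lazy language.
-- Without forcing, 'go' would build the thunk (((0 + 1) + 2) + ...) and
-- only evaluate it at the very end; 'seq' forces each step eagerly.
sumStrict :: [Integer] -> Integer
sumStrict = go 0
  where
    go acc []       = acc
    go acc (x : xs) = let acc' = acc + x
                      in acc' `seq` go acc' xs
```

In practice the standard library's Data.List.foldl' applies the same technique.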
The first revision, named Haskell 2010, was announced in November 2009 and published in July 2010. Haskell 2010 is an incremental update to the language, mostly incorporating several well-used and uncontroversial features previously enabled via compiler-specific flags.

Hierarchical module names. Module names are allowed to consist of dot-separated sequences of capitalized identifiers, rather than only one such identifier. This lets modules be named in a hierarchical manner (e.g., Data.List instead of List), although technically modules are still in a single monolithic namespace. This extension was specified in an addendum to Haskell 98 and was in practice universally used.

The foreign function interface (FFI) allows bindings to other programming languages. Only bindings to C are specified in the Report, but the design allows for other language bindings. To support this, data type declarations were permitted to contain no constructors, enabling robust nonce types for foreign data that could not be constructed in Haskell. This extension was also previously specified in an addendum to the Haskell 98 Report and widely used.

So-called n+k patterns (definitions of the form fact (n+1) = (n+1) * fact n) were no longer allowed. This syntactic sugar had misleading semantics, in which the code looked like it used the (+) operator, but in fact desugared to code using (-) and (>=).

The rules of type inference were relaxed to allow more programs to type check.

Some syntax issues (changes in the formal grammar) were fixed: pattern guards were added, allowing pattern matching within guards; resolution of operator fixity was specified in a simpler way that reflected actual practice; an edge case in the interaction of the language's lexical syntax of operators and comments was addressed; and the interaction of do-notation and if-then-else was tweaked to eliminate unexpected syntax errors.

The LANGUAGE pragma was specified.
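Pattern guards, one of the syntax changes listed above, let a guard bind variables by matching a pattern rather than merely testing a Boolean. A minimal sketch (lookupDefault is a hypothetical helper, not a standard function):

```haskell
-- Pattern guards (standardized in Haskell 2010): the guard succeeds only
-- if 'lookup k table' yields a 'Just', binding 'v' for use on the right.
lookupDefault :: Eq k => v -> k -> [(k, v)] -> v
lookupDefault def k table
  | Just v <- lookup k table = v
  | otherwise                = def
```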
By 2010, dozens of extensions to the language were in wide use, and GHC (among other compilers) provided the LANGUAGE pragma to specify individual extensions with a list of identifiers. Haskell 2010 compilers are required to support the Haskell2010 extension and are encouraged to support several others, which correspond to extensions added in Haskell 2010. === Future standards === The next formal specification had been planned for 2020. On 29 October 2021, with GHC version 9.2.1, the GHC2021 extension was released. While this is not a formal language specification, it combines several stable, widely used GHC extensions with Haskell 2010. == Features == Haskell features lazy evaluation, lambda expressions, pattern matching, list comprehension, type classes and type polymorphism. It is a purely functional programming language, which means that functions generally have no side effects. A distinct construct exists to represent side effects, orthogonal to the type of functions. A pure function can return a side effect that is subsequently executed, modeling the impure functions of other languages. Haskell has a strong, static type system based on Hindley–Milner type inference. Its principal innovation in this area is type classes, originally conceived as a principled way to add overloading to the language, but they have since found many more uses. The construct that represents side effects is an example of a monad: a general framework which can model various computations such as error handling, nondeterminism, parsing and software transactional memory. Monads are defined as ordinary datatypes, but Haskell provides some syntactic sugar for their use. Haskell has an open, published specification, and multiple implementations exist. Its main implementation, the Glasgow Haskell Compiler (GHC), is both an interpreter and native-code compiler that runs on most platforms.
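The separation of pure functions from effectful and failure-prone computations described above can be sketched as follows. The names double, safeDiv, and calc are illustrative; Maybe here plays the role of a monad modeling error handling:

```haskell
-- 'double' is a pure function: no side effects, same output for same input.
double :: Int -> Int
double x = 2 * x

-- 'safeDiv' uses the Maybe monad to model error handling without exceptions.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- do-notation (syntactic sugar for the Monad type class) chains the steps;
-- any Nothing short-circuits the whole computation.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  q <- safeDiv a b
  r <- safeDiv q c
  pure (double r)
```

The same do-notation works for IO, nondeterminism (lists), parsers, and other monads, which is what makes the construct general.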
GHC is noted for its rich type system incorporating recent innovations such as generalized algebraic data types and type families. The Computer Language Benchmarks Game also highlights its high-performance implementation of concurrency and parallelism. An active, growing community exists around the language, and more than 5,400 third-party open-source libraries and tools are available in the online package repository Hackage. == Code examples == A "Hello, World!" program in Haskell (only the last line is strictly necessary): The factorial function in Haskell, defined in a few different ways (the first line is the type annotation, which is optional and is the same for each implementation): Using Haskell's fixed-point combinator allows this function to be written without any explicit recursion. As the Integer type supports arbitrary precision, this code will compute values such as factorial 100000 (a 456,574-digit number) with no loss of precision. An implementation of an algorithm similar to quicksort over lists, where the first element is taken as the pivot: == Implementations == All listed implementations are distributed under open source licenses. Implementations that fully or nearly comply with the Haskell 98 standard include: The Glasgow Haskell Compiler (GHC) compiles to native code on many different processor architectures, and to ANSI C, via one of two intermediate languages: C--, or in more recent versions, LLVM (formerly Low Level Virtual Machine) bitcode. GHC has become the de facto standard Haskell dialect. There are libraries (e.g., bindings to OpenGL) that work only with GHC. GHC was also distributed with the Haskell platform. GHC features an asynchronous runtime that schedules threads across multiple CPU cores, similarly to the Go runtime. Jhc, a Haskell compiler written by John Meacham, emphasizes speed and efficiency of generated programs and exploring new program transformations. Ajhc is a fork of Jhc.
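The code examples described earlier (the "Hello, World!" program, factorial, and the quicksort-like sort) are conventionally written as follows; this is a sketch, and in a standalone program the hello-world action would simply be named main:

```haskell
-- "Hello, World!": as a complete program, only this definition (under the
-- name 'main') is strictly necessary.
helloWorld :: IO ()
helloWorld = putStrLn "Hello, World!"

-- Factorial, with the optional type annotation; Integer gives arbitrary
-- precision, so large values such as 'factorial 100000' are computed exactly.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

-- The same function without explicit pattern-matching recursion:
factorial' :: Integer -> Integer
factorial' n = product [1 .. n]

-- A quicksort-like sort over lists, taking the first element as the pivot:
quicksort :: Ord a => [a] -> [a]
quicksort []       = []
quicksort (p : xs) =
  quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]
```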
The Utrecht Haskell Compiler (UHC) is a Haskell implementation from Utrecht University. It supports almost all Haskell 98 features plus many experimental extensions. It is implemented using attribute grammars and is primarily used for research on generated type systems and language extensions. Implementations no longer actively maintained include: The Haskell User's Gofer System (Hugs) is a bytecode interpreter. It was once one of the implementations used most widely, alongside the GHC compiler, but has now been mostly replaced by GHCi. It also comes with a graphics library. HBC is an early implementation supporting Haskell 1.4. It was implemented by Lennart Augustsson in Lazy ML, on which it was also based. It has not been actively developed for some time. nhc98 is a bytecode compiler focusing on minimizing memory use. The York Haskell Compiler (Yhc) was a fork of nhc98, with the goals of being simpler, more portable and efficient, and integrating support for Hat, the Haskell tracer. It also had a JavaScript backend, allowing users to run Haskell programs in web browsers. Implementations not fully Haskell 98 compliant, and using a variant Haskell language, include: Eta and Frege are dialects of Haskell targeting the Java virtual machine. Gofer is an educational dialect of Haskell, with a feature called constructor classes, developed by Mark Jones. It was supplanted by the Haskell User's Gofer System (Hugs). Helium is a newer dialect of Haskell; its focus is on making learning easier via clearer error messages, achieved by disabling type classes by default. == Notable applications == Agda is a proof assistant written in Haskell. Cabal is a tool for building and packaging Haskell libraries and programs. Darcs is a revision control system written in Haskell, with several innovative features, such as more precise control of patches to apply. The Glasgow Haskell Compiler (GHC) is also often a testbed for advanced functional programming features and optimizations in other programming languages.
Git-annex is a tool to manage (big) data files under Git version control. It also provides a distributed file synchronization system (git-annex assistant). Linspire Linux chose Haskell for system tools development. Pandoc is a tool to convert one markup format into another. Pugs is a compiler and interpreter for the programming language then named Perl 6, but since renamed Raku. TidalCycles is a domain-specific language for live coding musical patterns, embedded in Haskell. Xmonad is a window manager for the X Window System, written fully in Haskell. GarganText is a collaborative tool, written fully in Haskell and PureScript, that maps texts through semantic analysis in any web browser; it is used, for instance, in the research community to draw up state-of-the-art reports and roadmaps. === Industry === Bluespec SystemVerilog (BSV) is a language extension of Haskell, for designing electronics. It is an example of a domain-specific language embedded into Haskell. Further, Bluespec, Inc.'s tools are implemented in Haskell. Cryptol, a language and toolchain for developing and verifying cryptography algorithms, is implemented in Haskell. Facebook implements its anti-spam programs in Haskell, maintaining the underlying data access library as open-source software. The Cardano blockchain platform is implemented in Haskell. GitHub implemented Semantic, an open-source library for analysis, diffing, and interpretation of untrusted source code, in Haskell. Standard Chartered's financial modelling language Mu is syntactic Haskell running on a strict runtime.
seL4, the first formally verified microkernel, used Haskell as a prototyping language for the OS developer (p. 2). At the same time, the Haskell code defined an executable specification with which to reason, for automatic translation by the theorem-proving tool (p. 3). The Haskell code thus served as an intermediate prototype before final C refinement (p. 3). Target stores' supply chain optimization software is written in Haskell. The back ends of Co–Star and Mercury Technologies are written in Haskell. === Web === Notable web frameworks written for Haskell include: IHP Servant Snap Yesod == Criticism == Jan-Willem Maessen, in 2002, and Simon Peyton Jones, in 2003, discussed problems associated with lazy evaluation while also acknowledging the theoretical motives for it. In addition to purely practical considerations such as improved performance, they note that lazy evaluation makes it more difficult for programmers to reason about the performance of their code (particularly its space use). Bastiaan Heeren, Daan Leijen, and Arjan van IJzendoorn in 2003 also observed some stumbling blocks for Haskell learners: "The subtle syntax and sophisticated type system of Haskell are a double edged sword—highly appreciated by experienced programmers but also a source of frustration among beginners, since the generality of Haskell often leads to cryptic error messages." To address the error messages, researchers from Utrecht University developed an advanced interpreter called Helium, which improved the user-friendliness of error messages by limiting the generality of some Haskell features. In particular, it disables type classes by default. Ben Lippmeier designed Disciple as a strict-by-default (lazy by explicit annotation) dialect of Haskell with a type-and-effect system, to address Haskell's difficulties in reasoning about lazy evaluation and in using traditional data structures such as mutable arrays. He argues (p. 20) that "destructive update furnishes the programmer with two important and powerful tools ... a set of efficient array-like data structures for managing collections of objects, and ... the ability to broadcast a new value to all parts of a program with minimal burden on the programmer." Robert Harper, one of the authors of Standard ML, has given his reasons for not using Haskell to teach introductory programming. Among these are the difficulty of reasoning about resource use with non-strict evaluation, that lazy evaluation complicates the definition of datatypes and inductive reasoning, and the "inferiority" of Haskell's (old) class system compared to ML's module system. Haskell's build tool, Cabal, has historically been criticized for poorly handling multiple versions of the same library, a problem known as "Cabal hell". The Stackage server and Stack build tool were made in response to these criticisms. Cabal itself has since addressed this problem by borrowing ideas from Nix, with the new approach becoming the default in 2019. == Related languages == Clean is a close, slightly older relative of Haskell. Its biggest deviation from Haskell is in the use of uniqueness types instead of monads for input/output (I/O) and side effects. A series of languages inspired by Haskell, but with different type systems, have been developed, including: Agda, a functional language with dependent types. Cayenne, with dependent types. Elm, a functional language to create web front-end apps, with no support for user-defined or higher-kinded type classes or instances. Epigram, a functional language with dependent types suitable for proving properties of programs. Idris, a general purpose functional language with dependent types, developed at the University of St Andrews. PureScript transpiles to JavaScript. Ωmega, a strict language that allows introduction of new kinds, and programming at the type level.
Other related languages include: Curry, a functional/logic programming language based on Haskell. Notable Haskell variants include: Generic Haskell, a version of Haskell with type system support for generic programming. Hume, a strict functional language for embedded systems, with a Haskell-like expression language and syntax; it models processes as stateless automata communicating over single-element mailbox channels, with state kept by feedback into the mailboxes and a wiring description mapping outputs to channels. == Conferences and workshops == The Haskell community meets regularly for research and development activities. The main events are: International Conference on Functional Programming (ICFP) Haskell Symposium (formerly the Haskell Workshop) Haskell Implementors Workshop Commercial Users of Functional Programming (CUFP) ZuriHac, a hackathon held every year in Zurich Since 2006, a series of organized hackathons, the Hac series, has been held, aimed at improving the programming language tools and libraries. == References == == Bibliography == Reports Peyton Jones, Simon, ed. (2003). Haskell 98 Language and Libraries: The Revised Report. Cambridge University Press. ISBN 978-0521826143. Marlow, Simon, ed. (2010). Haskell 2010 Language Report (PDF). Haskell.org. Textbooks Davie, Antony (1992). An Introduction to Functional Programming Systems Using Haskell. Cambridge University Press. ISBN 978-0-521-25830-2. Bird, Richard (1998). Introduction to Functional Programming using Haskell (2nd ed.). Prentice Hall Press. ISBN 978-0-13-484346-9. Hudak, Paul (2000). The Haskell School of Expression: Learning Functional Programming through Multimedia. New York: Cambridge University Press. ISBN 978-0521643382. Hutton, Graham (2007). Programming in Haskell. Cambridge University Press. ISBN 978-0521692694. O'Sullivan, Bryan; Stewart, Don; Goerzen, John (2008). Real World Haskell. Sebastopol: O'Reilly. ISBN 978-0-596-51498-3.
Real World Haskell (full text). Thompson, Simon (2011). Haskell: The Craft of Functional Programming (3rd ed.). Addison-Wesley. ISBN 978-0201882957. Lipovača, Miran (April 2011). Learn You a Haskell for Great Good!. San Francisco: No Starch Press. ISBN 978-1-59327-283-8. (full text) Bird, Richard (2014). Thinking Functionally with Haskell. Cambridge University Press. ISBN 978-1-107-45264-0. Bird, Richard; Gibbons, Jeremy (July 2020). Algorithm Design with Haskell. Cambridge University Press. ISBN 978-1-108-49161-7. Tutorials Hudak, Paul; Peterson, John; Fasel, Joseph (June 2000). "A Gentle Introduction To Haskell, Version 98". Haskell.org. Learn You a Haskell for Great Good! - A community version (learnyouahaskell.github.io). An up-to-date community maintained version of the renowned "Learn You a Haskell" (LYAH) guide. Daumé, Hal III. Yet Another Haskell Tutorial (PDF). Assumes far less prior knowledge than official tutorial. Yorgey, Brent (12 March 2009). "The Typeclassopedia" (PDF). The Monad.Reader (13): 17–68. Maguire, Sandy (2018). Thinking with Types: Type-Level Programming in Haskell. History Hudak, Paul; Hughes, John; Peyton Jones, Simon; Wadler, Philip (2007). "A history of Haskell" (PDF). Proceedings of the third ACM SIGPLAN conference on History of programming languages. pp. 12–1–55. doi:10.1145/1238844.1238856. ISBN 978-1-59593-766-7. S2CID 52847907. Hamilton, Naomi (19 September 2008). "The A-Z of Programming Languages: Haskell". Computerworld. == External links == Official website
https://en.wikipedia.org/wiki/Haskell
This is the list of original programming currently and formerly broadcast by the Indian television channel Sony Entertainment Television (SET). == Current broadcasts == == Former broadcasts == === Acquired series === === Anthology series === === Comedy series === === Drama series === === Horror/supernatural series === === Mythological series === === Reality/non-scripted programming === === Hindi dubbed shows === ==== Animated series ==== === Specials === == See also == List of programmes broadcast by Sony SAB Sony Pal == References ==
https://en.wikipedia.org/wiki/List_of_programs_broadcast_by_Sony_Entertainment_Television
Nickelodeon (nicknamed Nick) is an American pay television channel and the flagship property of the Nickelodeon Group, a subdivision of the Paramount Media Networks division of Paramount Global. It was launched on April 1, 1979, as the first cable channel for children. It is primarily aimed at children and adolescents aged 2 to 17, along with a broader family audience through its programming blocks. The channel began as a test broadcast on December 1, 1977, as part of QUBE, an early cable television system broadcast locally in Columbus, Ohio. On April 1, 1979, the channel was renamed Nickelodeon and launched to a new nationwide audience, with Pinwheel as its inaugural program. The network was initially commercial-free and remained without advertising until 1984. Nickelodeon underwent a rebranding of its programming and image that year, and its ensuing success led to it and its sister networks MTV and VH1 being sold to Viacom in 1985. The Nickelodeon franchise has expanded via several sister channels and program blocks. Nick Jr. launched as a preschool morning block on January 4, 1988, and was eventually spun off into the Nick Jr. Channel in 2009. Nicktoons, based on the flagship brand for original animated series, launched as a standalone channel in 2002. Noggin, an interactive educational brand created in partnership with Sesame Workshop, existed as a channel from 1999 to 2009 and a mobile streaming service from 2015 to 2024. Two blocks aimed at teenage audiences, Nickelodeon's TEENick and Noggin's The N, were merged to form the TeenNick channel in 2009. As of December 2023, Nickelodeon was available to approximately 70 million pay television households in the United States, down from its peak of 101 million households in 2011. == History == The channel's name comes from the first five-cent movie theaters called nickelodeons.
Its history dates back to December 1, 1977, when Warner Cable Communications launched the first 2-way interactive cable system, QUBE, in Columbus, Ohio. The C-3 cable channel carried Pinwheel daily from 7:00 a.m. to 9:00 p.m. Eastern Time, and the channel was labelled "Pinwheel" on remote controllers, as it was the only program broadcast. Initially scheduled for a February 1979 launch, Nickelodeon launched on April 1, 1979, initially distributed to Warner Cable systems via satellite on the RCA Satcom-1 transponder. The channel was originally commercial-free; advertising was introduced in January 1984. == Programming == Programming seen on Nickelodeon includes animated series (such as SpongeBob SquarePants, The Loud House, Middlemost Post, The Patrick Star Show, Kamp Koral: SpongeBob's Under Years, The Smurfs, Rugrats and Monster High), live-action scripted series (such as Danger Force, Tyler Perry's Young Dylan and That Girl Lay Lay), and original made-for-TV movies, while the network's daytime schedule is dedicated to shows targeting preschoolers (such as Bubble Guppies, Paw Patrol, and Blue's Clues & You!). A recurring program was the bi-monthly special editions of Nick News with Linda Ellerbee, a news magazine series aimed at children that debuted in 1992 as a weekly series and ended in 2015. In June 2020, Nickelodeon announced that it would bring back Nick News in a series of hour-long specials. The first installment, Kids, Race and Unity: A Nick News Special premiered on June 29, 2020, and was hosted by R&B musician Alicia Keys. Since 2021, Nickelodeon has aired at least one live National Football League game a year, produced by corporate sibling CBS Sports and incorporating elements unique to Nickelodeon into the broadcast such as green slime in the end zone and SpongeBob SquarePants' face superimposed on the netting of the goalposts. Nickelodeon also carries the weekly shoulder program NFL Slimetime during the season which includes similar graphics.
Nickelodeon offered the first alternate broadcast of a Super Bowl in 2024 when it aired a SpongeBob SquarePants-themed simulcast of CBS' coverage. === Nicktoons === Nicktoons is the branding for Nickelodeon's original animated television series. Until 1991, the animated series that aired on Nickelodeon were largely imported from foreign countries, with some original animated specials that were also featured on the channel up to that point. Though the Nicktoons branding has infrequently been used by the network itself since the 2002 launch of the channel of the same name, original animated series continue to make up a substantial portion of Nickelodeon's lineup. Roughly six to seven hours of these programs are seen on the weekday schedule, and around nine hours on weekends, including a dedicated weekend morning animation block. In 2006, the channel struck a deal with DreamWorks Animation to develop the studio's animated films into television series (such as The Penguins of Madagascar). Since the early 2010s, Nickelodeon Animation Studio has also produced series based on preexisting IP purchased by Paramount, such as Winx Club and Teenage Mutant Ninja Turtles. === Movies === Nickelodeon has produced a variety of original made-for-TV movies, which usually premiere in weekend evening timeslots or on school holidays. Nickelodeon also periodically acquires theatrically released feature films for broadcast on the channel. The channel occasionally airs feature films produced by the network's Nickelodeon Movies film production division (whose films are distributed by sister company Paramount Pictures). Although the film division bears the Nickelodeon brand name, the channel does not have access to most of the movies produced by its film unit.
The majority of the live-action feature films produced under the Nickelodeon Movies banner are licensed for broadcast by various free-to-air and pay television outlets within the United States other than Nickelodeon (although the network has aired a few live-action Nickelodeon Movies releases such as Angus, Thongs and Perfect Snogging and Good Burger). Nickelodeon also advertises hour-long episodes of its original series as movies, though the "TV movie" versions of Nickelodeon's original series differ from traditional television films in that they have shorter running times (approximately 45 minutes, as opposed to the 75–100 minute run times that most television movies have), and use a traditional multi-camera setup for regular episodes (unless the program is originally shot in the single-camera setup common to films) with some on-location filming. In 2002, Nickelodeon entered a long-standing broadcast partnership with Mattel to air films and specials based on the toy company's Barbie (and later Monster High) dolls. The first Barbie movie to air on Nickelodeon was Barbie as Rapunzel on November 24, 2002. The Barbie and Monster High films are usually aired under a brokered format in which Mattel purchases the time in order to promote the release of their films on DVD within a few days of the Nickelodeon premiere, an arrangement possible because Nickelodeon does not have to meet the Federal Communications Commission rules that disallow such arrangements for broadcast channels under regulations banning paid programming aimed at children. === Programming blocks === ==== Current ==== Nick Jr. – Nickelodeon currently broadcasts shows targeted at preschool-aged children on Monday through Fridays from 7 a.m. to 2 p.m. Eastern and Pacific Time (7:00 to 10:00 a.m. during the summer months, other designated school break periods, and on national holidays).
The block primarily targets audiences of preschool age, as Nickelodeon's usual audience of school-aged children is in school during the block's designated time period. Programs currently seen in this block include Paw Patrol, Peppa Pig (from the UK), Blaze and the Monster Machines, Ryan's Mystery Playdate, Blue's Clues & You!, Santiago of the Seas, and Baby Shark's Big Show!. Nick at Nite – Nickelodeon's nighttime programming service, which premiered on July 1, 1985, and broadcasts from prime time to early morning (the block's air time varies each night). It originally featured classic sitcoms from the 1950s and 1960s such as The Donna Reed Show, Mr. Ed and Lassie; programming eventually shifted towards repeats of popular sitcoms from the 1980s to the 2000s such as Home Improvement, The Cosby Show and Roseanne. In 1996, a pay television channel based on the block, TV Land (formerly Nick at Nite's TV Land until 1997), launched with a similar format of programs. Nick at Nite has also occasionally incorporated original scripted and competition series, with some in recent years produced through its parent network's Nickelodeon Productions unit. As of 2021, programming on Nick at Nite consists entirely of acquired shows such as Full House, Friends, Mom and Young Sheldon. Since 2004, Nielsen has broken out the television ratings of Nick at Nite and Nickelodeon as two separate networks. ==== Former ==== SNICK – "SNICK" (short for "Saturday Night Nickelodeon") was the network's first dedicated Saturday primetime block that aired from 8:00 to 10:00 p.m. Eastern and Pacific Time. Geared toward preteens and teenagers, it debuted on August 15, 1992 (with the initial lineup featuring two established series that originally aired on Sundays, Clarissa Explains It All and The Ren & Stimpy Show, and two new series, Roundhouse and Are You Afraid of the Dark?). The block mainly featured live-action series (primarily comedies), although it periodically featured animated series.
SNICK was discontinued on January 29, 2005, and was replaced the following week (February 5, 2005) by a Saturday night edition of the TEENick block. Nick in the Afternoon – "Nick in the Afternoon" was a daytime block that ran on weekday afternoons during the summer months from 1995 to 1997, and aired in an extended format until December for its final year in 1998. It was hosted by Stick Stickly, a Mr. Bill-like popsicle stick character (puppeteered by Rick Lyon and voiced by actor Paul Christie, who would later voice the Noggin mascot Moose A. Moose). The block was replaced for summer 1999 by "Henry and June's Summer" (hosted by the animated hosts of the anthology series KaBlam!). From 2011 to 2012, Stick Stickly returned to television for TeenNick's "The '90s Are All That" to host "U-Pick with Stick" on Friday nights as a user-chosen programming concept. U-Pick Live – "U-Pick Live" (originally branded as "U-Pick Friday" from 1999 to late 2000, and originally hosted by the Henry and June characters from KaBlam!) was a block that aired weekday afternoons from 5:00 to 7:00 p.m. Eastern and Pacific Time from October 14, 2002, to May 27, 2005, which was broadcast from studios in New York City's Times Square district, where Nickelodeon is headquartered. Using a similar concept that originated in 1994 with the Nick in the Afternoon block, "U-Pick Live" allowed viewer interaction in selecting the programs (usually cartoons) that would air on the block via voting on the network's website. TEENick – "TEENick" was a teenage-oriented block that ran from March 4, 2001, to February 1, 2009, which ran on Sundays from 6:00 to 9:00 p.m. Eastern and Pacific Time; a secondary block on Saturdays launched in 2005, taking over the 8:00 to 10:00 p.m. Eastern/Pacific timeslot long held by SNICK. It was originally hosted by Nick Cannon, and then by Jason Everhart (aka "J. Boogie").
Beginning in January 2007, Noggin's own teenage-targeted block The N ran a spin-off block called "TEENick on The N." The TEENick name, which was removed on February 1, 2009, later became the name of the channel TeenNick on September 28, 2009. ME:TV – "ME:TV" was a short-lived live hosted afternoon block that ran during summer 2007 on weekday afternoons from 2:00 to 6:00 p.m. Eastern/Pacific Time. Nick Saturday Nights – a primetime live-action block airing from 8:00 to 9:30 p.m. Eastern and Pacific Time. It was introduced on September 22, 2012, as Gotta See Saturday Nights. Recent episodes of certain original series may air when no new episodes are scheduled to air that week. Premieres of the network's original made-for-TV movies also occasionally aired during the primetime block. Saturday premieres were discontinued for the time being on December 11, 2021. Nick Studio 10 – "Nick Studio 10" was a short-lived late afternoon programming block that ran from February 18 to June 17, 2013, on weekdays from 4:00 to 6:00 p.m. Eastern and Pacific Time. The block featured wraparound segments based on episodes of the network's animated series, which aired on an off-the-clock schedule because the segments aired following each program's individual acts. That New Thursday Night – a live-action comedy block airing from 7:00 to 8:00 p.m. Eastern and Pacific Time. The schedule featured Danger Force, Tyler Perry's Young Dylan, That Girl Lay Lay, The Really Loud House, and Erin & Aaron (all first-run episodes were cycled on the schedule, giving it a variable schedule). It was discontinued on June 29, 2023. AfterToons – an animation block airing weekday afternoons and featuring new episodes of a rotating selection of Nickelodeon animated series. The series featured were SpongeBob SquarePants, The Loud House, The Patrick Star Show, Big Nate, Rugrats, and The Smurfs. It was discontinued on November 24, 2023.
==== Special events ==== Nickelodeon Kids' Choice Awards – The Kids' Choice Awards are a 90-minute-long annual live awards show held on the fourth Saturday night in March (formerly the first Saturday in April until 2008, but returned in 2011). The award show (whose winners are selected by Nickelodeon viewers through voting on the channel's website and through text messaging) honors popular television series and movies, actors, athletes and music acts, with winners receiving a hollow orange blimp figurine (one of the logo outlines used for much of the network's "splat logo" era from 1984 to 2009). Nickelodeon Kids' Choice Sports – A spin-off of the Kids' Choice Awards, "Kids' Choice Sports" is held in July with the same KCA voting procedures and differing categories for team sports and athlete achievements for the past year (featuring categories such as "Best Male Athlete", "Best Female Athlete", "King Of Swag", and "Queen Of Swag"), along with the award featuring a sports-specific purple mohawk. Its inaugural ceremony aired on July 17, 2014. Nickelodeon HALO Awards – The HALO Awards features five ordinary teens who are Helping And Leading Others (HALO). Its inaugural ceremony aired on December 11, 2009. The awards show was hosted by Nick Cannon and aired on Nickelodeon and TeenNick every November or December until 2017. Worldwide Day of Play – The "Worldwide Day of Play" is an annual event held on a Saturday afternoon in late September that began on October 2, 2004, to mark the conclusion of the "Let's Just Play" campaign launched that year, both of which are designed to encourage kids to exercise and participate in outdoor activities; schools and educational organizations are also encouraged to host local events to promote activity among children during the event.
Nickelodeon and its sister channels (except for the Pacific and Mountain Time Zone feeds and the Nick 2 Pacific feed that is distributed to the Eastern and Central Time Zones), some of the network's international channels and associated websites are suspended (with a message encouraging viewers to participate in outdoor activities during the period) from 12:00 to 3:00 p.m. Eastern and Pacific Time on the day of the event. Since 2010, the Worldwide Day of Play event became part of The Big Help program, as part of an added focus on healthy lifestyles in addition to the program's main focus on environmental issues. ==== Blocks on broadcast networks ==== Untitled UPN block – In 1998, Viacom's UPN entered into discussions with the network to produce a new block, but nothing ultimately materialized. Nickelodeon en Telemundo – On November 9, 1998, Telemundo introduced a daily block of Spanish dubs of Nickelodeon's series (such as Rugrats, Aaahh!!! Real Monsters, Hey Arnold!, Rocko's Modern Life, and Blue's Clues); the weekday edition of the block ran until September 5, 2000, when it was relegated to weekends in order to make room for the morning news program Hoy En El Mundo. Nickelodeon's contract with Telemundo ended in November 2001, after the network was acquired by NBC, though certain programs would return in 2004 as part of the Telemundo Kids block. Nick on CBS/Nick Jr. on CBS – On September 14, 2002, Nickelodeon began producing a two-hour Saturday morning block for CBS (which was co-owned with Nickelodeon at the time as a result of then-network parent Viacom's 1999 acquisition of CBS) to comply with the Children's Television Act. The block featured episodes of series such as As Told by Ginger, The Wild Thornberrys, Rugrats, Hey Arnold!, and Pelswick which premiered on most CBS stations. The block was retooled in 2004 as a preschool-oriented block featuring Nick Jr. shows (such as Blue's Clues, Dora the Explorer, and Little Bill); "Nick Jr. 
on CBS" was replaced in September 2006 by the KOL Secret Slumber Party block (produced by DIC Entertainment, which was subsequently acquired by Canada-based Cookie Jar (now WildBrain), as a result of CBS and Viacom's split into separate companies at the end of 2005, but re-merged in late 2019. == Related networks and services == === Current sister channels === ==== Nick Jr. Channel ==== Nick Jr. Channel (sometimes shortened to Nick Jr.) is a pay television network aimed mainly at children between 2 and 6 years of age. It features a mix of current and former preschool-oriented programs from Nickelodeon, as well as some shows that are exclusive to the channel. The Nick Jr. Channel launched on September 28, 2009, as a spin-off of Nickelodeon's preschool programming block of the same name, which had aired since January 4, 1988. The channel replaced Noggin, which was relaunched as a streaming service in 2015 and acts as a separate sister brand. Noggin's programming is distinct from the Nick Jr. channel's; it mainly carried preteen-oriented programs at its launch, and its 2015 streaming service features a variety of exclusive series. On October 1, 2012, the Nick Jr. Channel introduced NickMom, a four-hour nighttime block aimed at parents, which ran until September 28, 2015. While traditional advertising appeared on the channel during the NickMom block, the network otherwise only runs programming promotions and underwriter-style sponsorships in lieu of regular commercials. ==== Nicktoons ==== Nicktoons is a pay television network that launched on May 1, 2002, as Nicktoons TV; it was renamed Nicktoons in April 2003 and rebranded as Nicktoons Network in September 2005 before reverting to its previous name in September 2009. 
The network airs a mix of newer live-action and animated shows from Nickelodeon such as Henry Danger, The Fairly OddParents, The Loud House, SpongeBob SquarePants, and Teenage Mutant Ninja Turtles alongside original series airing exclusively on Nicktoons. ==== TeenNick ==== TeenNick is a pay television network that is aimed at adolescents and young adults, named after the TEENick block that aired on Nickelodeon from March 2001 to February 2009. The channel merged programming from the TEENick block with The N, a former block on Noggin. Although TeenNick has more relaxed program standards than the other Nickelodeon channels (save for Nick at Nite and the NickMom block on Nick Jr.) – allowing for moderate profanity, suggestive dialogue and some violent content – the network has shifted its lineup almost exclusively towards current and former Nickelodeon series (including some that are burned off due to low ratings on the flagship channel) that have stricter content standards. It also airs some acquired sitcoms and drama series. ==== NickMusic ==== NickMusic is a pay television network in the United States featuring music videos from artists appealing to Nickelodeon's target audience. It launched on the channel space formerly held by MTV Hits on September 9, 2016. Like its sibling music video-only networks BET Jams, BET Soul, and CMT Music, NickMusic is based on an automated "wheel" schedule that was introduced during the early years of MTV2. A new loop starts at 6 a.m. Eastern Time, and is then repeated at 2 p.m. and 10 p.m. Lyric videos are sometimes substituted due to content concerns with the artist's actual music video. The network launched on May 1, 2002, as MTV Hits, with its programming composed entirely of music videos. As with MTV Jams, the network was named for a daily program on MTV; in this case, MTV Hits, which was that network's main pop music video program. 
The network's lineup was composed of current hit music videos, along with a few older videos from earlier in the year, as well as a few from the late 1990s. As both MTV Hits and NickMusic, the network has maintained a commercial-free format, other than internal promotions for Nickelodeon or MTV and MTV-branded properties. The network has no individual or original programs; TeenNick Top 10, a program shared with TeenNick, was cancelled in mid-2018. In electronic program listings, the titles of each 'block' merely delineate an hour of programming; beyond denoting video theming, the titles receive no on-air mention. The network's specific theming to younger pop artists has also been underplayed as of 2024, due to various cuts at Paramount Global and the network's complete disassociation from further developing "triple threat" stars due to personnel and industry changes. === Former sister channels === Nickelodeon Games and Sports for Kids (commonly branded as Nickelodeon GAS or Nick GAS) was a pay television network that launched on March 1, 1999, as part of the suite of high-tier channels launched by MTV Networks. It ran a mix of game shows and other competition programs from Nickelodeon (essentially formatted as a children's version of—and Viacom's answer to—the Game Show Network). The channel formally ceased operations on December 31, 2007, and was replaced by a short-lived 24-hour version of Noggin's teen-oriented block The N. However, an automated loop of Nick GAS continued to be carried on Dish Network, for reasons that remain unclear, until April 23, 2009. NickMom (stylized as nickmom) was a programming block launched on October 1, 2012, airing in the late night hours on the Nick Jr. Channel. The block aired its own original programming aimed at parents until 2014, then began to carry acquired films and sitcoms.
Due to Viacom's 2015 cutbacks involving acquired programming and low ratings, the NickMom block and associated website were discontinued in the early morning hours of September 28, 2015. Nick 2 was the off-air brand for a secondary timeshift channel of Nickelodeon, formerly available exclusively on high-tier cable packages as a complement to the main Nickelodeon feed. It repackaged Nickelodeon's Eastern and Pacific Time Zone feeds for the opposite time zones – the Pacific feed was distributed to the Eastern and Central Time Zones, and the Eastern feed to the Pacific and Mountain Time Zones – so that the difference in local airtimes for a particular program between two geographic locations was three hours at most. This allowed viewers a second chance to watch a program after its initial airing on the Eastern Time Zone feed, or to watch a show ahead of its airing on the Pacific Time Zone feed of the main channel (for example, the Nick at Nite block would start at 9:00 p.m. Sundays through Fridays and 10:30 p.m. Saturdays Eastern Time on the Nick 2 Pacific feed, and at 12:00 p.m. weekdays and 10:00 a.m. weekends Pacific Time on the Nick 2 Eastern feed). Nick 2 never broadcast in high definition, except through Xfinity's IPTV services. The service existed from around 2000 until November 2018, launching as Nick TOO. The timeshift channel was originally offered as part of the MTV Networks Digital Suite, a slate of channels exclusive to high-tier cable packages (many of the networks also earned satellite carriage over time), and was the only American example of two feeds of a non-premium service being provided to cable and IPTV providers.
A Nick TOO logo was used on the channel until 2004, when MTV Networks decided to stop using customized branding on the feed (a logo for Nick 2 was only used for identification purposes on electronic program guides as a placeholder image); most television listings thus showed the additional channel under the brandings "Nick Pacific (NICKP)/Nick West (NICKW)," or "Nick East (NICKE)." DirecTV and Dish Network also offer both Nickelodeon feeds, though they carry both time zone feeds of most of the children's networks that the providers offer by default. Viacom Media Networks discontinued the Nick 2 digital cable service on November 22, 2018, likely due to video on demand options making timeshift channels for the most part superfluous. Both time zone feeds continue to be offered on Xfinity as well as satellite providers, unbranded. NickRewind (TeenNick block) – On July 25, 2011, TeenNick began airing The '90s Are All That, renamed The Splat in October 2015, a block of Nickelodeon's most popular 1990s programming, targeting the network's target demographic from that era. After several name changes, the block was finally called "NickRewind" and focused on programming from the 1980s, 1990s, and 2000s (mainly the latter two), and aired nightly. On January 31, 2022, the block was discontinued, with TeenNick's overnight programming mainly consisting of regular reruns. === Other services === === Production studios === ==== Nickelodeon Animation Studio ==== Nickelodeon Animation Studio (formerly Games Animation, Inc.) is a production firm with two main locations (one in Burbank, California, and the other in New York City). They serve as the animation facilities for many of the network's Nicktoons and Nick Jr. series. ==== Nickelodeon Productions ==== Nickelodeon Productions is a production studio in New York that provides original sitcoms, animated shows and game-related programs for Nickelodeon.
Although Nickelodeon Animation Studio serves as the animation facility for the network's animated series, the Nickelodeon Productions logo is also seen at the end of animated television shows. The studio was founded as Games Productions in 1987, after MTV Networks was purchased by Viacom. ==== Nickelodeon on Sunset ==== Nickelodeon on Sunset was a studio complex in Hollywood, California which served as the primary production facility for Nickelodeon's series from 1997 until 2017; the studio is designated by the National Register of Historic Places as a historical landmark as a result of its prior existence as the Earl Carroll Theater, a prominent dinner theater. It served as the production facilities for several Nickelodeon series. == Media == === Nickelodeon Games === Nickelodeon Games (known as Nickelodeon Interactive from 1993 to 1997, Nickelodeon Software from 1997 to 2002, and Nick Games from 2002 to 2009) is the video gaming division of Nickelodeon. It was originally a part of Viacom Consumer Products, with early games being published by Viacom New Media. The division began a long-standing relationship with game publisher THQ, which started when THQ published its Ren & Stimpy game for Nintendo consoles in 1992, followed by a full-fledged console deal in 1998 with several Rugrats titles; the partnership expanded in 2001, when THQ acquired some of the assets of Mattel Interactive, namely the computer publishing rights and all video game rights to The Wild Thornberrys. Nickelodeon also worked alongside THQ on an original game concept, Tak and the Power of Juju. === Nick.com === Nick.com is Nickelodeon's main website, which launched in October 1995 as a component of America Online's Kids Only channel before eventually moving to the full World Wide Web. It provides content, as well as video clips and full episodes of Nickelodeon series available for streaming. The website's popularity grew to the point where in March 1999, Nick.com became the highest rated website among children aged 6–14 years old.
Nickelodeon used the website in conjunction with television programs, which increased traffic. In 2001, Nickelodeon partnered with Networks Inc. to provide broadband video games for rent from Nick.com; the move was a further step in the multimedia direction that the developers wanted to take the website. Skagerlind indicated that over 50% of Nick.com's audience were using a high-speed connection, which allowed them to expand the gaming and video streaming options on the website. === Mobile apps === Nickelodeon released a free mobile app for smartphones and tablet computers operating on the Apple and Android platforms in February 2013. Like Nick.com, a TV Everywhere login code provided by participating subscription providers is required to view individual episodes of the network's series. In December 2023, Paramount Global announced that the app and all other Paramount-owned apps would be discontinued soon. The apps were discontinued on January 31, 2024. === Nickelodeon Movies === Nickelodeon Movies is a motion picture production unit that was founded in 1995, as a family entertainment arm of Paramount Pictures (owned by Nickelodeon's corporate parent, Paramount Global). The first film released from the studio was the 1996 mystery/comedy Harriet the Spy. Nickelodeon Movies has produced films based on Nickelodeon animated programs including The Rugrats Movie and The SpongeBob SquarePants Movie, as well as other adaptations and original live-action and animated projects. === Nickelodeon Magazine === Nickelodeon Magazine was a print magazine that was launched in 1993; the channel had previously published a short-lived magazine effort in 1990. Nickelodeon Magazine incorporated informative non-fiction pieces, humor (including pranks and parodic pieces), interviews, recipes (such as green slime cake), and a comic book section in the center of each issue featuring original comics by leading underground cartoonists as well as strips about popular Nicktoons.
It ceased publication after 16 years in December 2009, citing a sluggish magazine industry. A new version of the magazine was published by Papercutz from June 2015 to mid-2016. === Nick Radio === Nick Radio was a radio network that launched on September 30, 2013, in a partnership between the network and iHeartMedia (then called Clear Channel Communications), which distributed the network mainly via its iHeartRadio web platform and mobile app. Its programming was also streamed via the Nick.com website and on New York City radio station WHTZ as a secondary HD channel. Nick Radio focused on Top 40 and pop music (geared towards the network's target audience of children, with radio edits of some songs incorporated due to inappropriate content), along with celebrity interview features. In addition to regular on-air DJs, Nick Radio also occasionally featured guest DJ stints by popular artists as well as stars from Nickelodeon's original series. Nick Radio shut down without warning on July 31, 2019, and was replaced by Hit Nation Junior, likely due to the network's general failure to establish any sustained "triple threat" artists/actors throughout the 2010s, along with the general failure of the children's-only radio format in the streaming age. == Themed experiences and hotels == === Nickelodeon Universe === Nickelodeon Universe at the Mall of America is the first indoor Nickelodeon theme park in the United States. Before being re-themed to Nickelodeon in 2007, the park was themed as "Camp Snoopy" and "The Park at MoA." The theme park contains a variety of Nickelodeon-themed rides, including: SpongeBob SquarePants: Rock Bottom Plunge, Fairly Odd Coaster, and Teenage Mutant Ninja Turtles: Shell Shock. Nickelodeon and Triple Five Group opened a second Nickelodeon Universe theme park in the American Dream Meadowlands complex on October 25, 2019.
Upon opening, it became the largest indoor theme park in the western hemisphere, unseating the Mall of America's Nickelodeon Universe, which had held the title from 2008 to 2019. On August 18, 2009, Nickelodeon and Southern Star Amusements announced plans to build a Nickelodeon Universe in New Orleans, Louisiana on the site of the former Six Flags New Orleans by the end of 2010, which was set to be the first outdoor Nickelodeon Universe theme park. On November 9, 2009, Nickelodeon announced that it had ended the licensing agreement with Southern Star Amusements. === Theme park areas === Current attractions Nickland is an area inside of Movie Park Germany featuring Nickelodeon-themed rides, including a SpongeBob SquarePants-themed "Splash Battle" ride, and a Jimmy Neutron-themed roller coaster. Nickelodeon Land opened on May 4, 2011, at Blackpool Pleasure Beach, featuring several rides based on Nickelodeon series including SpongeBob SquarePants, Avatar: The Last Airbender, Dora the Explorer, and The Fairly OddParents. Nickelodeon Land opened in September 2015 at Sea World, featuring multiple rides based on Nickelodeon programs including a SpongeBob junior roller coaster, and a Teenage Mutant Ninja Turtles-themed flyer. Nickelodeon Land is also an area within Parque de Atracciones de Madrid. Opened in 2014, this area contains rides and attractions based on Jimmy Neutron, SpongeBob SquarePants, Paw Patrol, and other Nickelodeon franchises. Nickelodeon Playtime/Nickelodeon Adventure are two themed children's entertainment centers in Essex, England and Shenzhen, China. Play areas and attractions in these centers are immersively themed to SpongeBob SquarePants, Paw Patrol, and additional Nickelodeon shows. Closed areas Nickelodeon Universe was also an area inside of Paramount's Kings Island featuring Nickelodeon-themed rides and attractions.
It was one of the largest sections in the park and was voted "Best Kid's Area" by Amusement Today magazine from 2001 until its closure in 2009 after the park's sale to Cedar Fair (the Paramount Parks ended up with CBS Corporation in the 2006 CBS/Viacom split, and CBS sold them off as quickly as possible as non-core surplus assets). Nickelodeon Studios was an attraction at the Universal Orlando Resort that opened on June 7, 1990, and housed production for many Nickelodeon programs (including Clarissa Explains It All, What Would You Do? and All That). It closed on April 30, 2005, after Nickelodeon's production facilities were moved to New York City and Burbank, California. The building that formerly housed it was later occupied by the Blue Man Group Sharp Aquos Theatre, which closed in February 2021. Another Nickelodeon-themed attraction at the park, Jimmy Neutron's Nicktoon Blast, opened in 2003 but closed in 2011 to make way for the new ride Despicable Me: Minion Mayhem. In 2012, a store based on SpongeBob SquarePants opened in Woody Woodpecker's Kidzone, replacing Universal's Cartoon Store. Nickelodeon Central was an area inside the Paramount Parks properties, including California's Great America, Carowinds, Kings Dominion, Canada's Wonderland, and Dreamworld, that featured shows, attractions and themes featuring Nickelodeon characters, all of which were wound down when CBS Corporation was given ownership of the theme parks in the Viacom/CBS split and eventually sold most of the properties to Cedar Fair without renewal of the Nickelodeon licensing agreements. The only Nickelodeon Central remaining in existence was at Dreamworld in Australia, which is not under Cedar Fair ownership. The license was revoked in 2011 and the area became "Kid's World" and later DreamWorks Experience. Nickelodeon Blast Zone was an area in Universal Studios Hollywood that featured several attractions inspired by Nickelodeon shows.
Among the attractions present in the area were "Nickelodeon Splash", a waterpark-style area; "The Wild Thornberrys Adventure Temple", a jungle-themed foam ball play area; and "Nick Jr. Backyard", a medium-sized toddler playground. It ran from 2001 to 2007 and was rethemed as "The Adventures of Curious George", which closed in 2008 to make way for The Wizarding World of Harry Potter at Universal Studios Hollywood. Adjacent to Nickelodeon Blast Zone was the "Panasonic Theatre", which housed Totally Nickelodeon, an audience-participation game show that ran from 1997 to 2000. "Rugrats Magic Adventure" replaced the game show in 2001, but closed in 2002 to make way for Shrek 4-D, which ran from May 2003 to August 2017; it then closed to make way for DreamWorks Theatre Featuring Kung Fu Panda, which opened on June 15, 2018. Nickelodeon Splat City was an area inside California's Great America (from 1995 to 2002), Kings Island (from 1995 to 2000) and Kings Dominion (from 1995 to 1999) that featured messy- and water-themed attractions. The slime refinery theme was carried out in attractions such as the "Green Slime Zone Refinery", the "Crystal Slime Mining Maze", and the "Green Slime Transfer Truck". All of these areas were later transformed into either Nickelodeon Central or Nickelodeon Universe before being discontinued, as mentioned above, when sold off by CBS Corporation. === Hotel brands === Nickelodeon Suites Resort was a Nickelodeon-themed hotel in Orlando, Florida, located near the Universal Orlando Resort and 1 mile (1.6 km) from Walt Disney World. The hotel originally opened in 1999, and re-opened under its Nickelodeon re-theming in 2005. It included one-to-three-bedroom themed kid suites, a water park area, an arcade, and various forms of entertainment themed after Nickelodeon shows. It also contained a Nick at Nite-themed lounge area for adults. The property was re-themed as the "Holiday Inn Resort Orlando Suites" on June 1, 2016.
Nickelodeon Resorts by Marriott was a proposed hotel chain similar to the Nickelodeon Suites Resort, featuring a 110,000-square-foot (10,000 m2) waterpark area and 650 hotel rooms. Announced in 2007, the first location was scheduled to open in San Diego in 2010; however, the plans were canceled in 2009. Plans for the remaining 19 hotels originally slated to open remain unclear. Nickelodeon Hotels & Resorts is a hotel chain that opened its first location in Punta Cana, Dominican Republic in 2016, in association with Karisma Hotels and Resorts. The second location opened in Riviera Maya, Mexico in 2021. A third location is in development for a 2025 opening, and will be connected to the Land of Legends theme park in Antalya, Turkey. A fourth location is in development for Everest Place in Orlando, Florida for a 2026 opening, and a fifth location is currently in development for a 2027 opening in Garden Grove, California. === Cruises === Nickelodeon at Sea was a series of Nickelodeon-themed cruise packages in partnership with Norwegian Cruise Line. They featured special amenities and entertainment themed to various Nickelodeon properties. The program ended in 2015. Norwegian Cruise Line also hosted some Nickelodeon Cruises on the Norwegian Jewel and Norwegian Epic liners, as part of Nickelodeon at Sea. == International == Between 1993 and 1995, Nickelodeon opened international channels in the United Kingdom, Australia, and Germany; by the latter year, the network had provided its programming to broadcasters in 70 countries.
Since the mid-1990s and early 2000s, Nickelodeon as a brand has expanded to include language- or culture-specific channels for various other territories in different parts of the world including Europe, Asia, Oceania, and Canada, and has licensed some of its cartoons and other content, in English and local languages, to free-to-air networks and subscription channels such as KI.KA and Super RTL in Germany, RTÉ Two (English language) and TG4 (Irish language) in Ireland, YTV (in English) and Vrak.TV (in French, defunct) in Canada, Canal J in France, Alpha Kids in Greece, CNBC-e in Turkey and Network 10's localised version of Nickelodeon in Australia. == Notes == == See also == List of Nickelodeon novelizations Nicktoons == References == == Bibliography == Hendershot, Heather, ed. (2004). Nickelodeon Nation: The History, Politics, and Economics of America's Only TV Channel for Kids. New York: New York University Press. ISBN 0-8147-3652-1. Klickstein, Mathew (2013). SLIMED! An Oral History of Nickelodeon's Golden Age. New York: Plume. ISBN 978-0-14-219685-4. == External links == Official website
https://en.wikipedia.org/wiki/Nickelodeon
A research program (British English: research programme) is a professional network of scientists conducting basic research. The term was used by philosopher of science Imre Lakatos to blend and revise the normative model of science offered by Karl Popper's The Logic of Scientific Discovery (with its idea of falsifiability) and the descriptive model of science offered by Thomas Kuhn's The Structure of Scientific Revolutions (with its ideas of normal science and paradigm shifts). Lakatos found falsificationism impractical and often not practiced, and found normal science—where a paradigm of science, mimicking an exemplar, extinguishes differing perspectives—to be more monopolistic than science actually is. Lakatos found that many research programs coexisted. Each had a hard core of theories immune to revision, surrounded by a protective belt of malleable theories. A research programme vies against others to be the most progressive. Extending the research program's theories into new domains is theoretical progress, and experimentally corroborating such extensions is empirical progress, while always refusing falsification of the research program's hard core. A research program might degenerate—lose progressiveness—but later return to progressiveness. == References == == Examples == United States Global Change Research Program World Climate Research Programme
https://en.wikipedia.org/wiki/Research_program
In television and motion pictures, a tentpole or tent-pole is a program or film that supports the financial performance of a film studio, television network, or cinema chain. It is an analogy for the way a strong central pole provides a stable structure to a tent. A tent-pole film may be expected to support the sale of tie-in merchandise. == Types == In the film industry, tent-poles are sometimes widely released initial offerings in a string of releases and are expected by studios to turn a profit in a short period of time. Such programming is often accompanied by larger budgets and heavy promotion. A tentpole movie, for example, is a film that is expected to support a wide range of ancillary tie-in products such as toys and games. == Examples == An example of this strategy in television is to schedule a popular television program alongside new or unknown programming, in an attempt to keep viewers watching after the flagship program is over; a prominent example is the long-running Star Trek series. A related concept is the hammock: if a network has two tent-pole series, it can boost the performance of a weak or emerging show by inserting it in the schedule between the two tent-poles. == See also == Aftershow Audience flow Blockbuster (entertainment) Event movie Four-quadrant movie List of highest-grossing films == References ==
https://en.wikipedia.org/wiki/Tentpole
In software engineering and programming language theory, the abstraction principle (or the principle of abstraction) is a basic dictum that aims to reduce duplication of information in a program (usually with emphasis on code duplication) whenever practical by making use of abstractions provided by the programming language or software libraries. The principle is sometimes stated as a recommendation to the programmer, but sometimes stated as a requirement of the programming language, assuming it is self-understood why abstractions are desirable to use. The origins of the principle are uncertain; it has been reinvented a number of times, sometimes under a different name, with slight variations. When read as recommendations to the programmer, the abstraction principle can be generalized as the "don't repeat yourself" (DRY) principle, which recommends avoiding the duplication of information in general, and also avoiding the duplication of human effort involved in the software development process. == The principle == As a recommendation to the programmer, in its formulation by Benjamin C. Pierce in Types and Programming Languages (2002), the abstraction principle reads (emphasis in original): Each significant piece of functionality in a program should be implemented in just one place in the source code. Where similar functions are carried out by distinct pieces of code, it is generally beneficial to combine them into one by abstracting out the varying parts. As a requirement of the programming language, in its formulation by David A. Schmidt in The structure of typed programming languages (1994), the abstraction principle reads: The phrases of any semantically meaningful syntactic class may be named. == History and variations == The abstraction principle is mentioned in several books. Some of these, together with the formulation if it is succinct, are listed below.
Alfred John Cole, Ronald Morrison (1982) An introduction to programming with S-algol: "[Abstraction] when applied to language design is to define all the semantically meaningful syntactic categories in the language and allow an abstraction over them". Bruce J. MacLennan (1983) Principles of programming languages: design, evaluation, and implementation: "Avoid requiring something to be stated more than once; factor out the recurring pattern". Jon Pearce (1998) Programming and Meta-Programming in Scheme: "Structure and function should be independent". The principle plays a central role in design patterns in object-oriented programming, although most writings on that topic do not give a name to the principle. The Design Patterns book by the Gang of Four states: "The focus here is encapsulating the concept that varies, a theme of many design patterns." This statement has been rephrased by other authors as "Find what varies and encapsulate it." In this century, the principle has been reinvented in extreme programming under the slogan "Once and Only Once". The definition of this principle was rather succinct in its first appearance: "no duplicate code". It has later been elaborated as applicable to other issues in software development: "Automate every process that's worth automating. If you find yourself performing a task many times, script it." == Implications == The abstraction principle is often stated in the context of some mechanism intended to facilitate abstraction. The basic mechanism of control abstraction is a function or subroutine. Data abstractions include various forms of type polymorphism. More elaborate mechanisms that may combine data and control abstractions include: abstract data types, including classes, polytypism etc. The quest for richer abstractions that allow less duplication in complex scenarios is one of the driving forces in programming language research and design.
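Pierce's formulation, combining similar functions by abstracting out the varying parts, can be illustrated with a short sketch (the functions and data here are invented for this example):

```python
# Two near-duplicate functions: each walks a list and accumulates a
# running total, differing only in the operation applied to each element.
def total_price(prices):
    result = 0
    for price in prices:
        result += price
    return result

def total_with_fee(prices):
    result = 0
    for price in prices:
        result += price + 1  # adds a flat per-item fee
    return result

# Abstracting out the varying part (the per-item transformation)
# combines both into a single function, as the principle recommends.
def total(prices, transform=lambda p: p):
    result = 0
    for price in prices:
        result += transform(price)
    return result

assert total([10, 20]) == total_price([10, 20]) == 30
assert total([10, 20], lambda p: p + 1) == total_with_fee([10, 20]) == 32
```

Passing the varying behavior as a parameter is the basic form of control abstraction mentioned below; richer mechanisms (polymorphism, abstract data types) generalize the same move.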
Inexperienced programmers may be tempted to introduce too much abstraction in their programs: abstraction that won't be used more than once. A complementary principle that emphasizes this issue is "You Ain't Gonna Need It" and, more generally, the KISS principle. Since code is usually subject to revisions, following the abstraction principle may entail refactoring code. The effort of rewriting a piece of code generically needs to be amortized against the estimated future benefits of an abstraction. A rule of thumb governing this was devised by Martin Fowler and popularized as the rule of three: if a piece of code is copied more than twice, i.e. it would end up having three or more copies, then it should be abstracted out. == Generalizations == "Don't repeat yourself", or the "DRY principle", is a generalization developed in the context of multi-tier architectures, where related code is by necessity duplicated to some extent across tiers, usually in different languages. In practical terms, the recommendation here is to rely on automated tools, such as code generators and data transformations, to avoid repetition. == Hardware programming interfaces == Beyond its role in organizing code, a hierarchical/recursive meaning of abstraction level in programming also refers to the interfaces between hardware communication layers, also called "abstraction levels" or "abstraction layers". In this case, the level of abstraction is often synonymous with the interface. For example, in examining shellcode and the interface between higher- and lower-level languages, the level of abstraction changes from operating system commands (for example, in C) to register- and circuit-level calls and commands (for example, in assembly and binary). In that example, the boundary or interface between the abstraction levels is the stack. == References ==
https://en.wikipedia.org/wiki/Abstraction_principle_(computer_programming)
Execution in computer and software engineering is the process by which a computer or virtual machine interprets and acts on the instructions of a computer program. Each instruction of a program is a description of a particular action which must be carried out in order for a specific problem to be solved. Execution involves the control unit repeatedly following a "fetch–decode–execute" cycle for each instruction. As the executing machine follows the instructions, specific effects are produced in accordance with the semantics of those instructions. Programs for a computer may be executed in a batch process without human interaction, or a user may type commands in an interactive session of an interpreter. In this case, the "commands" are simply program instructions, whose execution is chained together. The term run is used almost synonymously. A related meaning of both "to run" and "to execute" refers to the specific action of a user starting (or launching or invoking) a program, as in "Please run the application." == Process == Prior to execution, a program must first be written. This is generally done in source code, which is then compiled at compile time (and statically linked at link time) to produce an executable. This executable is then invoked, most often by an operating system, which loads the program into memory (load time), possibly performs dynamic linking, and then begins execution by moving control to the entry point of the program; all these steps depend on the Application Binary Interface of the operating system. At this point execution begins and the program enters run time. The program then runs until it ends, either in normal termination or a crash.
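The launch-and-terminate sequence described above can be observed from a program that starts another program. This is a minimal sketch using Python's standard subprocess module; the child commands are illustrative:

```python
import subprocess
import sys

# Invoke a program: the OS loads the executable, runs it from its entry
# point, and reports a termination status when it ends.
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True, text=True,
)
print(result.stdout.strip())   # hello
print(result.returncode)       # 0, i.e. normal termination

# A nonzero exit code signals abnormal termination to the invoker.
crashed = subprocess.run([sys.executable, "-c", "raise SystemExit(1)"])
print(crashed.returncode)      # 1
```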
== Executable == Executable code, an executable file, or an executable program, sometimes simply referred to as an executable or binary, is a list of instructions and data to cause a computer "to perform indicated tasks according to encoded instructions", as opposed to a data file that must be interpreted (parsed) by a program to be meaningful. The exact interpretation depends upon the use. "Instructions" is traditionally taken to mean machine code instructions for a physical CPU. In some contexts, a file containing scripting instructions (such as bytecode) may also be considered executable. == Context of execution == The context in which execution takes place is crucial. Very few programs execute on a bare machine. Programs usually contain implicit and explicit assumptions about resources available at the time of execution. Most programs execute within a multitasking operating system and run-time libraries specific to the source language that provide crucial services not supplied directly by the computer itself. This supportive environment, for instance, usually decouples a program from direct manipulation of the computer peripherals, providing more general, abstract services instead. === Context switching === In order for programs and interrupt handlers to work without interference and share the same hardware memory and access to the I/O system, in a multitasking operating system running on a digital system with a single CPU/MCU, it is required to have some sort of software and hardware facilities to keep track of an executing process's data (memory page addresses, registers, etc.) and to save and restore that data to the state it was in before the process was suspended. This is achieved by context switching. Running programs are often assigned Process Context IDentifiers (PCIDs). In Linux-based operating systems, a set of data stored in registers is usually saved into a process descriptor in memory to implement switching of context. PCIDs are also used.
== Runtime == Runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program. A runtime error is detected during or after the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed. Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Many other runtime errors exist and are handled differently by different programming languages: division by zero errors, domain errors, array subscript out of bounds errors, arithmetic underflow and overflow errors, and many others generally considered software bugs, which may or may not be caught and handled by any particular computer language. === Implementation details === When a program is to be executed, a loader first performs the necessary memory setup and links the program with any dynamically linked libraries it needs, and then execution begins starting from the program's entry point. In some cases, a language or implementation will have these tasks done by the language runtime instead, though this is unusual in mainstream languages on common consumer operating systems. Some program debugging can only be performed (or is more efficient or accurate when performed) at runtime. Logic errors and array bounds checking are examples. For this reason, some programming bugs are not discovered until the program is tested in a production environment with real data, despite sophisticated compile-time checking and pre-release testing. In this case, the end-user may encounter a "runtime error" message.
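Two of the classic runtime errors named above, division by zero and an out-of-bounds subscript, can be caught and handled while the program runs. A minimal sketch in Python, with invented helper names:

```python
# Runtime errors are detected while the program executes, not beforehand
# by a compiler; here each problematic case is caught and handled.
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:   # division by zero error
        return None

def safe_index(xs, i):
    try:
        return xs[i]
    except IndexError:          # array subscript out of bounds error
        return None

print(safe_divide(1, 0))       # None
print(safe_index([1, 2], 5))   # None
```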
=== Application errors (exceptions) === Exception handling is one language feature designed to handle runtime errors, providing a structured way to catch completely unexpected situations as well as predictable errors or unusual results without the amount of inline error checking required of languages without it. More recent advancements in runtime engines enable automated exception handling, which provides "root-cause" debug information for every exception of interest and is implemented independently of the source code, by attaching a special software product to the runtime engine. == Runtime system == A runtime system, also called a runtime environment, primarily implements portions of an execution model. This is not to be confused with the runtime lifecycle phase of a program, during which the runtime system is in operation. When treating the runtime system as distinct from the runtime environment (RTE), the first may be defined as a specific part of the application software (IDE) used for programming, a piece of software that provides the programmer a more convenient environment for running programs during their production (testing and similar), while the second (RTE) would be the very instance of an execution model being applied to the developed program, which is itself then run in the aforementioned runtime system. Most programming languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues, including the management of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, interfacing with the operating system, and more. The compiler makes assumptions depending on the specific runtime system to generate correct code.
Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language. == Instruction cycle == The instruction cycle (also known as the fetch–decode–execute cycle, or simply the fetch-execute cycle) is the cycle that the central processing unit (CPU) follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage. In simpler CPUs, the instruction cycle is executed sequentially, each instruction being processed before the next one is started. In most modern CPUs, the instruction cycles are instead executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps. == Interpreter == A system that executes a program is called an interpreter of the program. Loosely speaking, an interpreter directly executes a program. This contrasts with a language translator that converts a program from one language to another before it is executed. == Virtual machine == A virtual machine (VM) is the virtualization/emulation of a computer system. Virtual machines are based on computer architectures and provide functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination. Virtual machines differ and are organized by their function, shown here: System virtual machines (also termed full virtualization VMs) provide a substitute for a real machine. They provide functionality needed to execute entire operating systems. A hypervisor uses native execution to share and manage hardware, allowing for multiple environments which are isolated from one another, yet exist on the same physical machine. 
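The fetch–decode–execute cycle and the notion of an interpreter described above can be combined in one minimal sketch: a toy interpreter that fetches an instruction, decodes its name, and executes it. The stack-based instruction set (PUSH/ADD/PRINT/HALT) is invented for illustration:

```python
# Program held in memory as a list of instructions.
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PRINT",), ("HALT",)]

pc, stack = 0, []          # program counter and a value stack
while True:
    op = program[pc]       # fetch the instruction at the program counter
    pc += 1
    name = op[0]           # decode: which action is requested
    if name == "PUSH":     # execute: produce the instruction's effect
        stack.append(op[1])
    elif name == "ADD":
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
    elif name == "PRINT":
        print(stack[-1])   # prints 5
    elif name == "HALT":
        break
```

In a simple CPU, this loop runs one instruction at a time, as the text notes; pipelined CPUs overlap the stages of successive instructions instead.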
Modern hypervisors use hardware-assisted virtualization, i.e. virtualization-specific hardware features, primarily on the host CPUs. Process virtual machines are designed to execute computer programs in a platform-independent environment. Some virtual machine emulators, such as QEMU and video game console emulators, are designed to also emulate (or "virtually imitate") different system architectures, thus allowing execution of software applications and operating systems written for another CPU or architecture. OS-level virtualization allows the resources of a computer to be partitioned via the kernel. The terms are not universally interchangeable. == See also == Executable Run-time system Runtime program phase Program counter == References ==
https://en.wikipedia.org/wiki/Execution_(computing)
ML (Meta Language) is a general-purpose, high-level, functional programming language. It is known for its use of the polymorphic Hindley–Milner type system, which automatically assigns the data types of most expressions without requiring explicit type annotations (type inference), and ensures type safety; there is a formal proof that a well-typed ML program does not cause runtime type errors. ML provides pattern matching for function arguments, garbage collection, imperative programming, call-by-value and currying. While a general-purpose programming language, ML is used heavily in programming language research and is one of the few languages to be completely specified and verified using formal semantics. Its types and pattern matching make it well-suited and commonly used to operate on other formal languages, such as in compiler writing, automated theorem proving, and formal verification. == Overview == Features of ML include a call-by-value evaluation strategy, first-class functions, automatic memory management through garbage collection, parametric polymorphism, static typing, type inference, algebraic data types, pattern matching, and exception handling. ML uses static scoping rules. ML can be referred to as an impure functional language, because although it encourages functional programming, it does allow side-effects (like languages such as Lisp, but unlike a purely functional language such as Haskell). Like most programming languages, ML uses eager evaluation, meaning that all subexpressions are always evaluated, though lazy evaluation can be achieved through the use of closures. Thus, infinite streams can be created and used as in Haskell, but their expression is indirect. ML's strengths are mostly applied in language design and manipulation (compilers, analyzers, theorem provers), but it is a general-purpose language also used in bioinformatics and financial systems. 
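The indirect encoding of infinite streams mentioned above (lazy evaluation achieved through closures in an eager language) can be sketched outside ML as well. Here is a rough Python analogue, with invented function names, representing a stream as a head paired with a zero-argument closure that produces the rest on demand:

```python
# A lazy infinite stream: (head, thunk), where the thunk is a closure
# that builds the tail only when forced. This mirrors how eager
# languages such as ML express infinite streams indirectly.
def integers_from(n):
    return (n, lambda: integers_from(n + 1))

def take(k, stream):
    # Force only the first k elements of the stream.
    out = []
    while k > 0:
        head, rest = stream
        out.append(head)
        stream = rest()     # evaluate the delayed tail
        k -= 1
    return out

print(take(5, integers_from(10)))   # [10, 11, 12, 13, 14]
```

Because evaluation is eager, nothing past the forced prefix is ever computed, even though the stream is conceptually infinite.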
ML was developed by Robin Milner and others in the early 1970s at the University of Edinburgh, and its syntax is inspired by ISWIM. Historically, ML was conceived to develop proof tactics in the LCF theorem prover (whose language, pplambda, a combination of the first-order predicate calculus and the simply typed polymorphic lambda calculus, had ML as its metalanguage). Today there are several languages in the ML family; the three most prominent are Standard ML (SML), OCaml and F#. Ideas from ML have influenced numerous other languages, like Haskell, Cyclone, Nemerle, ATS, and Elm. == Examples == The following examples use the syntax of Standard ML. Other ML dialects such as OCaml and F# differ in small ways. === Factorial === The factorial function expressed as pure ML: This describes the factorial as a recursive function, with a single terminating base case. It is similar to the descriptions of factorials found in mathematics textbooks. Much of ML code is similar to mathematics in facility and syntax. Part of the definition shown is optional, and describes the types of this function. The notation E : t can be read as expression E has type t. For instance, the argument n is assigned type integer (int), and fac (n : int), the result of applying fac to the integer n, also has type integer. The function fac as a whole then has type function from integer to integer (int -> int), that is, fac accepts an integer as an argument and returns an integer result. Thanks to type inference, the type annotations can be omitted and will be derived by the compiler. Rewritten without the type annotations, the example looks like: The function also relies on pattern matching, an important part of ML programming. Note that parameters of a function are not necessarily in parentheses but separated by spaces. When the function's argument is 0 (zero) it will return the integer 1 (one). For all other cases the second line is tried. 
This is the recursion, and executes the function again until the base case is reached. This implementation of the factorial function is not guaranteed to terminate, since a negative argument causes an infinite descending chain of recursive calls. A more robust implementation would check for a nonnegative argument before recursing; the problematic case (when n is negative) demonstrates a use of ML's exception system. The function can be improved further by writing its inner loop as a tail call, such that the call stack need not grow in proportion to the number of function calls. This is achieved by adding an extra accumulator parameter to the inner function, yielding the final, tail-recursive version. === List reverse === The following function reverses the elements in a list. More precisely, it returns a new list whose elements are in reverse order compared to the given list. This implementation of reverse, while correct and clear, is inefficient, requiring quadratic time for execution. The function can be rewritten to execute in linear time. This function is an example of parametric polymorphism. That is, it can consume lists whose elements have any type, and return lists of the same type. === Modules === Modules are ML's system for structuring large projects and libraries. A module consists of a signature file and one or more structure files. The signature file specifies the API to be implemented (like a C header file, or a Java interface file). The structure implements the signature (like a C source file or a Java class file). For example, the following define an Arithmetic signature and an implementation of it using rational numbers, which are imported into the interpreter by the 'use' command. Interaction with the implementation is only allowed via the signature functions; for example, it is not possible to create a 'Rat' data object directly via this code. The 'structure' block hides all the implementation detail from outside.
ML's standard libraries are implemented as modules in this way. == See also == Standard ML and Standard ML § Implementations Dependent ML: a dependently typed extension of ML ATS: a further development of dependent ML Lazy ML: an experimental lazily evaluated ML dialect from the early 1980s PAL (programming language): an educational language related to ML OCaml: an ML dialect used to implement Coq and various software F#: an open-source cross-platform functional-first language for the .NET framework == References == == Further reading == == External links == Standard ML of New Jersey, another popular implementation F#, an ML implementation using the Microsoft .NET framework MLton, a whole-program optimizing Standard ML compiler CakeML, a read-eval-print loop version of ML with formally verified runtime and translation to assembler
https://en.wikipedia.org/wiki/ML_(programming_language)
In computing, a database is an organized collection of data or a type of data store based on the use of a database management system (DBMS), the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database. Before digital storage and retrieval of data became widespread, index cards were used for data storage in a wide range of applications and environments: in the home to record and store recipes, shopping lists, contact information and other organizational data; in business to record presentation notes, project research and notes, and contact information; in schools as flash cards or other visual aids; and in academic research to hold data such as bibliographical citations or notes in a card file. Professional book indexers used index cards in the creation of book indexes until they were replaced by indexing software in the 1980s and 1990s. Small databases can be stored on a file system, while large databases are hosted on computer clusters or cloud storage. The design of databases spans formal techniques and practical considerations, including data modeling, efficient data representation and storage, query languages, security and privacy of sensitive data, and distributed computing issues, including supporting concurrent access and fault tolerance. Computer scientists may classify database management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data.
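As a minimal illustration of data modeled as rows and columns and manipulated with SQL, the following sketch uses Python's built-in sqlite3 module; the table and column names are invented:

```python
import sqlite3

# Create a small relational database in memory: one table, three columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)"
)

# Write data as rows, using SQL with parameter placeholders.
conn.execute("INSERT INTO book (title, year) VALUES (?, ?)", ("SICP", 1985))
conn.execute("INSERT INTO book (title, year) VALUES (?, ?)", ("TAPL", 2002))

# Query by content: the SQL names what is wanted, not how to find it.
rows = conn.execute("SELECT title FROM book WHERE year > 1990").fetchall()
print(rows)   # [('TAPL',)]
conn.close()
```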
In the 2000s, non-relational databases became popular, collectively referred to as NoSQL, because they use different query languages. == Terminology and overview == Formally, a "database" refers to a set of related data accessed through the use of a "database management system" (DBMS), which is an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized. Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it. Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index) as size and usage requirements typically necessitate use of a database management system. Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups: Data definition – Creation, modification and removal of definitions that detail how the data is to be organized. Update – Insertion, modification, and deletion of the data itself. Retrieval – Selecting data according to specified criteria (e.g., a query, a position in a hierarchy, or a position in relation to other data) and providing that data either directly to the user, or making it available for further processing by the database itself or by other applications. The retrieved data may be made available in a more or less direct form without modification, as it is stored in the database, or in a new form obtained by altering it or combining it with existing data from the database. 
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure. Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database. Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans. Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security. == History == The sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks. 
The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid-1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational. The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another. The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM Db2, Oracle, MySQL, and Microsoft SQL Server are the most searched DBMS. The dominant database language, standardized SQL for the relational model, has influenced database languages for other data models. Object databases were developed in the 1980s to overcome the inconvenience of object–relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object–relational databases. The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key–value stores and document-oriented databases. 
A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs. === 1960s, navigational DBMS === The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense. As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market. The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods: Use of a primary key (known as a CALC key, typically implemented by hashing) Navigating relationships (called sets) from one record to another Scanning all the records in a sequential order Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However, CODASYL databases were complex and required significant training and effort to produce useful applications. 
IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed: the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL databases are classified as network databases. IMS remains in use as of 2014. === 1970s, relational DBMS === Edgar F. Codd worked at IBM in San Jose, California, in one of their offshoot offices that were primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks. In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organize the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). 
Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated. Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based. The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit. In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys. 
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided. As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic. Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard. IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. 
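The normalization just described can be sketched with SQLite; the table and column names here are illustrative, not from any particular system:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized schema: names, addresses, and phone numbers live in
# separate tables, linked only by the logical key user_id.
cur.executescript("""
    CREATE TABLE users     (user_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE addresses (user_id INTEGER REFERENCES users(user_id), city TEXT);
    CREATE TABLE phones    (user_id INTEGER REFERENCES users(user_id), number TEXT);
""")
cur.execute("INSERT INTO users VALUES (1, 'Ada')")
cur.execute("INSERT INTO addresses VALUES (1, 'London')")
# No phone row for Ada: optional facts simply have no record.

# A declarative query states WHAT is wanted, not how to navigate to it;
# finding an efficient access path is the DBMS's job.
cur.execute("""
    SELECT u.name, a.city
    FROM users u JOIN addresses a ON a.user_id = u.user_id
""")
print(cur.fetchall())  # [('Ada', 'London')]
```

Note that the join is expressed purely through the logical key `user_id`, never through a physical record address, exactly as Codd proposed.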
Most other DBMS implementations usually called relational are actually SQL DBMSs. In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998. === Integrated approach === In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine. Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued in certain applications by some companies like Netezza and Oracle (Exadata). === Late 1970s, SQL DBMS === IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". 
Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (IBM Db2). Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it was not until Oracle Version 2, in 1979, that Ellison beat IBM to market. Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions). In Sweden, Codd's paper was also read and Mimer SQL was developed in the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant. === 1980s, on the desktop === The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done.
The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top-selling software titles in the 1980s and early 1990s. === 1990s, object-oriented === The 1990s, along with a rise in object-oriented programming, saw changes in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be related to objects and their attributes and not to individual fields. The term "object–relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object–relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object–relational mappings (ORMs) attempt to solve the same problem. === 2000s, NoSQL and NewSQL === XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records. NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem, it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases are using what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency. NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system. == Use cases == Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software). Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database. == Classification == One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases. An in-memory database is a database that primarily resides in main memory, but is typically backed-up by non-volatile computer data storage. 
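An in-memory database of the kind just described can be seen in miniature with SQLite's `:memory:` mode, where the entire database lives in RAM and vanishes when the connection closes:

```python
import sqlite3

# ":memory:" keeps the whole database in main memory; nothing touches
# disk, so reads and writes avoid I/O latency entirely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metric (name TEXT, value REAL)")
conn.execute("INSERT INTO metric VALUES ('latency_ms', 0.42)")
rows = conn.execute("SELECT value FROM metric WHERE name = 'latency_ms'").fetchall()
print(rows)  # [(0.42,)]
# A production in-memory DBMS would add a persistence mechanism
# (snapshots or a write-ahead log) to survive restarts.
```

This is only a toy illustration; dedicated in-memory DBMSs add durability and replication machinery on top of the basic idea.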
Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment. An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers. A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are both developed by programmers and later maintained and used by end-users through a web browser and Open APIs. Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading, and managing data so as to make them available for further use. A deductive database combines logic programming with a relational database. A distributed database is one in which both the data and the DBMS span multiple computers. A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, information. Document-oriented databases are one of the main categories of NoSQL databases. An embedded database system is a DBMS which is tightly integrated with application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.
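The database triggers mentioned above under active databases can be sketched in SQLite; the audit-log schema here is purely illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit   (account_id INTEGER, old REAL, new REAL);

    -- The trigger fires automatically on every UPDATE: the DBMS itself
    -- reacts to the event, not the application code.
    CREATE TRIGGER log_balance AFTER UPDATE OF balance ON account
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO account VALUES (1, 100.0)")
conn.execute("UPDATE account SET balance = 75.0 WHERE id = 1")
print(conn.execute("SELECT * FROM audit").fetchall())  # [(1, 100.0, 75.0)]
```

The application never writes to `audit` directly, which is the point: the event-driven behavior belongs to the database, so every client that updates `account` is logged the same way.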
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases. A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view. Sometimes the term multi-database is used as a synonym for federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases. A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases. An array DBMS is a kind of NoSQL DBMS that allows modeling, storage, and retrieval of (usually large) multi-dimensional arrays such as satellite images and climate simulation output. In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. 
The World Wide Web is thus a large distributed hypertext database. A knowledge base (abbreviated KB, kb or Δ) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems together with their solutions and related experiences. A mobile database can be carried on or synchronized from a mobile computing device. Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business's customers; personnel databases that hold information such as salary, benefits, and skills data about employees; enterprise resource planning systems that record details about product components and parts inventory; and financial databases that keep track of the organization's money, accounting and financial dealings. A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries. The major parallel DBMS architectures, which are induced by the underlying hardware architecture, are: shared-memory architecture, where multiple processors share the main memory space, as well as other data storage; shared-disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage; and shared-nothing architecture, where each processing unit has its own main memory and other storage. Probabilistic databases employ fuzzy logic to draw inferences from imprecise data. Real-time databases process transactions fast enough for the result to come back and be acted on right away. A spatial database can store data with multidimensional features.
The queries on such data include location-based queries, like "Where is the closest hotel in my area?". A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically, the temporal aspects usually include valid-time and transaction-time. A terminology-oriented database builds upon an object-oriented database, often customized for a specific field. An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging. == Database management system == Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database." Examples of DBMSs include MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle Database, and Microsoft Access. The DBMS acronym is sometimes extended to indicate the underlying database model, with RDBMS for the relational, OODBMS for the object (oriented) and ORDBMS for the object–relational model. Other extensions can indicate some other characteristics, such as DDBMS for a distributed database management system. The functionality provided by a DBMS can vary enormously. The core functionality is the storage, retrieval and update of data.
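The "closest hotel" style of location-based query mentioned above can be approximated even in a plain relational store; a real spatial DBMS would answer it from a spatial index such as an R-tree rather than the full scan sketched here (names and coordinates are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hotel (name TEXT, lat REAL, lon REAL)")
conn.executemany("INSERT INTO hotel VALUES (?, ?, ?)", [
    ("Grand", 40.75, -73.99),
    ("Plaza", 40.76, -73.97),
    ("Depot", 41.10, -74.20),
])

# Naive nearest-neighbour: order every row by squared Euclidean
# distance from the user's position and take the first.
me = (40.758, -73.985)
row = conn.execute("""
    SELECT name FROM hotel
    ORDER BY (lat - ?) * (lat - ?) + (lon - ?) * (lon - ?)
    LIMIT 1
""", (me[0], me[0], me[1], me[1])).fetchone()
print(row[0])  # Grand
```

The difference a spatial database makes is purely in the access path: the query stays declarative, but an R-tree lets the engine skip most rows instead of computing a distance for each one.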
Codd proposed the following functions and services that a fully-fledged general-purpose DBMS should provide: data storage, retrieval and update; a user-accessible catalog or data dictionary describing the metadata; support for transactions and concurrency; facilities for recovering the database should it become damaged; support for authorization of access and update of data; access support from remote locations; and enforcement of constraints to ensure data in the database abides by certain rules. It is also generally to be expected that the DBMS will provide a set of utilities for such purposes as may be necessary to administer the database effectively, including import, export, monitoring, defragmentation and analysis utilities. The core part of the DBMS, interacting between the database and the application interface, is sometimes referred to as the database engine. Often DBMSs will have configuration parameters that can be statically and dynamically tuned, for example the maximum amount of main memory on a server the database can use. The trend is to minimize the amount of manual configuration, and for cases such as embedded databases the need to target zero-administration is paramount. The large major enterprise DBMSs have tended to increase in size and functionality and have involved up to thousands of human years of development effort throughout their lifetime. Early multi-user DBMS typically only allowed for the application to reside on the same computer with access via terminals or terminal emulation software. The client–server architecture was a development where the application resided on a client desktop and the database on a server allowing the processing to be distributed. This evolved into a multitier architecture incorporating application servers and web servers with the end user interface via a web browser with the database only directly connected to the adjacent tier.
A general-purpose DBMS will provide public application programming interfaces (APIs) and optionally a processor for database languages such as SQL to allow applications to be written to interact with and manipulate the database. A special-purpose DBMS may use a private API and be specifically customized and linked to a single application. For example, an email system performs many of the functions of a general-purpose DBMS, such as message insertion, message deletion, attachment handling, blocklist lookup, and associating messages with an email address; however, these functions are limited to what is required to handle email. == Application == External interaction with the database will be via an application program that interfaces with the DBMS. This can range from a database tool that allows users to execute SQL queries textually or graphically, to a website that happens to use a database to store and search information. === Application program interface === A programmer will code interactions to the database (sometimes referred to as a datasource) via an application program interface (API) or via a database language. The particular API or language chosen will need to be supported by the DBMS, possibly indirectly via a preprocessor or a bridging API. Some APIs aim to be database independent, ODBC being a commonly known example. Other common APIs include JDBC and ADO.NET. == Database languages == Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages: Data control language (DCL) – controls access to data; Data definition language (DDL) – defines data types such as creating, altering, or dropping tables and the relationships among them; Data manipulation language (DML) – performs tasks such as inserting, updating, or deleting data occurrences; Data query language (DQL) – allows searching for information and computing derived information.
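Three of these sublanguages (DDL, DML, DQL) can be exercised in a few lines of SQLite; DCL statements such as GRANT and REVOKE are omitted only because SQLite has no user accounts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the schema.
conn.execute("CREATE TABLE part (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")

# DML: insert and update data occurrences.
conn.execute("INSERT INTO part (name, qty) VALUES ('bolt', 100)")
conn.execute("UPDATE part SET qty = qty - 10 WHERE name = 'bolt'")

# DQL: search for information and compute derived values.
qty = conn.execute("SELECT qty FROM part WHERE name = 'bolt'").fetchone()[0]
print(qty)  # 90
```

In SQL the four sublanguages share one syntax, but conceptually they remain distinct roles: defining structure, changing data, querying data, and (elsewhere) controlling access.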
Database languages are specific to a particular data model. Notable examples include: SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs. OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL. XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and Db2, and also by in-memory XML processors such as Saxon. SQL/XML combines XQuery with SQL. A database language may also incorporate features like: DBMS-specific configuration and storage engine management; computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing; constraint enforcement (e.g. in an automotive database, only allowing one engine type per car); and an application programming interface version of the query language, for programmer convenience. == Storage == Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed.
Databases as digital objects contain three layers of information which must be stored: the data, the structure, and the semantics. Proper storage of all three layers is needed for future preservation and longevity of the database. Putting data into permanent storage is generally the responsibility of the database engine a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often using the operating systems' file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize (the best possible) these levels' reconstruction when needed by users and programs, as well as for computing additional types of needed information from the data (e.g., when querying the database). Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database. Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases. === Materialized views === Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. 
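SQLite lacks native materialized views, but the idea just described can be simulated by snapshotting a query result into an ordinary table (PostgreSQL, for comparison, offers this directly via CREATE MATERIALIZED VIEW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale (product TEXT, amount REAL)")
conn.executemany("INSERT INTO sale VALUES (?, ?)",
                 [("widget", 5.0), ("widget", 7.0), ("gadget", 3.0)])

# "Materialize" an aggregate query into a real table: subsequent reads
# are cheap, but the snapshot must be refreshed whenever `sale` changes,
# which is exactly the maintenance overhead described above.
conn.execute("""
    CREATE TABLE sale_totals AS
    SELECT product, SUM(amount) AS total FROM sale GROUP BY product
""")
totals = dict(conn.execute("SELECT product, total FROM sale_totals"))
print(totals)
```

The trade-off is visible even in this toy: the redundant `sale_totals` table buys read speed at the cost of storage and synchronization work.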
Storing such views saves the expense of computing them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy. === Replication === Occasionally a database employs storage redundancy by replicating database objects (with one or more copies) to increase data availability (both to improve performance of simultaneous multiple end-user accesses to the same database object, and to provide resiliency in case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated. === Virtualization === With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach. == Security == Database security deals with all aspects of protecting the database content, its owners, and its users. It ranges from protection from intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program). Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database.
The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or the use of specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by specially authorized personnel (designated by the database owner), using dedicated, protected security DBMS interfaces. This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; see, e.g., physical security) and in terms of interpreting them, or parts of them, as meaningful information (e.g., by looking at the strings of bits that they comprise and concluding specific valid credit-card numbers; see, e.g., data encryption). Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this in the database.
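The "subschema" idea above (letting one group of users see only payroll data) is commonly realized with views; a sketch in SQLite, with illustrative column names (in a multi-user DBMS, GRANT statements would restrict the payroll group to the view alone):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employee (
    id INTEGER PRIMARY KEY, name TEXT, salary REAL, medical_notes TEXT)""")
conn.execute("INSERT INTO employee VALUES (1, 'Ada', 95000.0, 'confidential')")

# A view acting as a subschema: payroll staff query this view and
# never see the medical column.
conn.execute("CREATE VIEW payroll AS SELECT id, name, salary FROM employee")
cols = [d[0] for d in conn.execute("SELECT * FROM payroll").description]
print(cols)  # ['id', 'name', 'salary']
```

The underlying table is untouched; access control is achieved by exposing a narrower external view of the same data.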
Monitoring can be set up to attempt to detect security breaches. Organizations therefore have good reason to take database security seriously: it safeguards them against breaches and attacks such as firewall intrusion, virus spread, and ransomware, and protects essential company information that must not be disclosed to outsiders. == Transactions and concurrency == Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring or releasing a lock, etc.), an abstraction supported in database systems and elsewhere. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). The acronym ACID describes some ideal properties of a database transaction: atomicity, consistency, isolation, and durability. == Migration == A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should maintain (if possible) the database-related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may also be desirable to maintain some aspects of the internal architectural level.
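The atomicity property from the ACID acronym above can be demonstrated directly: if any operation inside a transaction fails, the whole unit of work is rolled back. A minimal sketch with SQLite (the account schema is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE account (
    id INTEGER PRIMARY KEY, balance REAL CHECK (balance >= 0))""")
conn.execute("INSERT INTO account VALUES (1, 50.0)")
conn.execute("INSERT INTO account VALUES (2, 10.0)")
conn.commit()

try:
    with conn:  # one transaction: commit on success, rollback on error
        conn.execute("UPDATE account SET balance = balance + 80 WHERE id = 2")
        # This debit would drive account 1 negative, violating the CHECK
        # constraint, so the whole transaction is rolled back.
        conn.execute("UPDATE account SET balance = balance - 80 WHERE id = 1")
except sqlite3.IntegrityError:
    pass  # the credit to account 2 did not survive either

balances = conn.execute("SELECT balance FROM account ORDER BY id").fetchall()
print(balances)  # [(50.0,), (10.0,)] -- all or nothing
```

The first UPDATE succeeded in isolation, yet after the rollback neither change is visible: the transaction is atomic.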
A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate. This is in spite of the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs. == Building, maintaining, and tuning == After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.). When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation. After the database is created, initialized and populated it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added, new related application programs may be written to add to the application's functionality, etc. == Backup and restore == Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data).
To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When a database administrator decides to bring the database back to this state (e.g., by specifying this state by a desired point in time when the database was in this state), these files are used to restore that state. == Static analysis == Static analysis techniques for software verification can also be applied to query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular, for security purposes, such as fine-grained access control, watermarking, etc. == Miscellaneous features == Other DBMS features might include:
Database logs – This helps in keeping a history of the executed functions.
Graphics component for producing graphs and charts, especially in a data warehouse system.
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
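The backup-then-restore cycle described above can be sketched with the online backup API that Python's sqlite3 module has exposed since Python 3.7. The table and the "corruption" are invented for illustration; a real deployment would back up to files, not in-memory databases:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

# Take a backup: copies the full database state while src stays usable.
backup = sqlite3.connect(":memory:")  # in real use, a file on separate storage
src.backup(backup)

# Simulate loss of data in the source database...
src.execute("DELETE FROM t")
src.commit()

# ...then restore the desired state from the backup.
restored = backup.execute("SELECT x FROM t").fetchone()[0]  # -> 42
```

The backup holds the database state as of the moment it was taken, matching the text's notion of restoring to "a desired point in time".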
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as "DevOps for database". == Design and modeling == The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity–relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes. Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data. 
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design). The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency. The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS. 
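The goal of normalization, that each elementary "fact" is recorded in one place, can be made concrete with a small sqlite3 sketch. The customer/order schema below is invented for illustration: because the customer's city is stored once and referenced by key, one update keeps every order consistent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Normalized design: the city fact lives only in `customer`; the
# `order` table references it by key instead of repeating it per row.
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE "order" (id INTEGER PRIMARY KEY,
                      customer_id INTEGER REFERENCES customer(id),
                      total INTEGER);
INSERT INTO customer VALUES (1, 'Acme', 'Dubai');
INSERT INTO "order" VALUES (10, 1, 250), (11, 1, 90);
""")

# A single UPDATE changes the fact in its one recorded place...
conn.execute("UPDATE customer SET city = 'Frankfurt' WHERE id = 1")

# ...and every order automatically sees the new, consistent value.
rows = conn.execute("""
    SELECT o.id, c.city
    FROM "order" o JOIN customer c ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()
```

In an unnormalized design with the city copied onto every order row, the same change would require touching many rows and could leave them out of sync, which is exactly the anomaly normalization prevents.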
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself. === Models === A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format. Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object–relational database combines the two related structures. Physical data models include:
Inverted index
Flat file
Other models include:
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
=== External, conceptual, and internal views === A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level (or logical level) unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters.
It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities. While there is typically only one conceptual and internal view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are in the interest of the human resources department. Thus different departments need different views of the company's database. The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance. The conceptual view provides a level of indirection between internal and external. On the one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). 
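The finance-department example above can be sketched with a SQL view acting as an external level. The employee schema and column names are invented for illustration: the `payroll` view exposes payment details while hiding HR-only data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                       salary INTEGER, medical_notes TEXT);
INSERT INTO employee VALUES (1, 'Ada', 9000, 'private');

-- External view for the finance department: payment details only.
CREATE VIEW payroll AS SELECT id, name, salary FROM employee;
""")

cur = conn.execute("SELECT * FROM payroll")
cols = [d[0] for d in cur.description]
# cols == ['id', 'name', 'salary'] -- medical_notes is not exposed
```

Different departments can each get their own view over the same conceptual schema, and a change to the internal storage of `employee` leaves queries against `payroll` untouched, illustrating the data-independence point made above.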
The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures. == Research == Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, related concurrency control techniques, query languages and query optimization methods, RAID, and more. The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems – TODS, Data and Knowledge Engineering – DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE). == See also == == Notes == == References == == Sources == == Further reading ==
Ling Liu and M. Tamer Özsu (Eds.) (2009). Encyclopedia of Database Systems, 4100 p., 60 illus. ISBN 978-0-387-49616-0.
Gray, J. and Reuter, A. Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
Kroenke, David M. and David J. Auer. Database Concepts. 3rd ed. New York: Prentice, 2007.
Raghu Ramakrishnan and Johannes Gehrke, Database Management Systems.
Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts.
Lightstone, S.; Teorey, T.; Nadeau, T. (2007). Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more. Morgan Kaufmann Press. ISBN 978-0-12-369389-1.
Teorey, T.; Lightstone, S. and Nadeau, T. Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005. ISBN 0-12-685352-5.
CMU Database courses playlist
MIT OCW 6.830 | Fall 2010 | Database Systems
Berkeley CS W186
== External links == DB File extension – information about files with the DB extension
https://en.wikipedia.org/wiki/Database
In computing, aspect-oriented programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding behavior to existing code (an advice) without modifying the code, instead separately specifying which code is modified via a "pointcut" specification, such as "log all function calls when the function's name begins with 'set'". This allows behaviors that are not central to the business logic (such as logging) to be added to a program without cluttering the code of core functions. AOP includes programming methods and tools that support the modularization of concerns at the level of the source code, while aspect-oriented software development refers to a whole engineering discipline. Aspect-oriented programming entails breaking down program logic into cohesive areas of functionality (so-called concerns). Nearly all programming paradigms support some level of grouping and encapsulation of concerns into separate, independent entities by providing abstractions (e.g., functions, procedures, modules, classes, methods) that can be used for implementing, abstracting, and composing these concerns. Some concerns "cut across" multiple abstractions in a program, and defy these forms of implementation. These concerns are called cross-cutting concerns or horizontal concerns. Logging exemplifies a cross-cutting concern because a logging strategy must affect every logged part of the system. Logging thereby crosscuts all logged classes and methods. All AOP implementations have some cross-cutting expressions that encapsulate each concern in one place. The difference between implementations lies in the power, safety, and usability of the constructs provided. For example, interceptors that specify the methods to intercept express a limited form of cross-cutting, without much support for type safety or debugging. AspectJ has a number of such expressions and encapsulates them in a special class, called an aspect.
For example, an aspect can alter the behavior of the base code (the non-aspect part of a program) by applying advice (additional behavior) at various join points (points in a program) specified in a quantification or query called a pointcut (that detects whether a given join point matches). An aspect can also make binary-compatible structural changes to other classes, such as adding members or parents. == History == AOP has several direct antecedents: reflection and metaobject protocols, subject-oriented programming, Composition Filters, and Adaptive Programming. Gregor Kiczales and colleagues at Xerox PARC developed the explicit concept of AOP and followed this with the AspectJ AOP extension to Java. IBM's research team pursued a tool approach over a language design approach and in 2001 proposed Hyper/J and the Concern Manipulation Environment, which have not seen wide use. The examples in this article use AspectJ. The Microsoft Transaction Server is considered to be the first major application of AOP, followed by Enterprise JavaBeans. == Motivation and basic concepts == Typically, an aspect is scattered or tangled as code, making it harder to understand and maintain. It is scattered by the function (such as logging) being spread over a number of unrelated functions that might use its function, possibly in entirely unrelated systems or written in different languages. Thus, changing logging can require modifying all affected modules. Aspects become tangled not only with the mainline function of the systems in which they are expressed but also with each other. Changing one concern thus entails understanding all the tangled concerns or having some means by which the effect of changes can be inferred.
For example, consider a banking application with a conceptually very simple method for transferring an amount from one account to another. However, this transfer method overlooks certain considerations that a deployed application would require, such as verifying that the current user is authorized to perform this operation, encapsulating database transactions to prevent accidental data loss, and logging the operation for diagnostic purposes. In a version with all those new concerns added, the other interests become tangled with the basic functionality (sometimes called the business logic concern). Transactions, security, and logging all exemplify cross-cutting concerns. Now consider what would happen if we suddenly need to change the security considerations for the application. In the program's current version, security-related operations appear scattered across numerous methods, and such a change would require major effort. AOP tries to solve this problem by allowing the programmer to express cross-cutting concerns in stand-alone modules called aspects. Aspects can contain advice (code joined to specified points in the program) and inter-type declarations (structural members added to other classes). For example, a security module can include advice that performs a security check before accessing a bank account. The pointcut defines the times (join points) when one can access a bank account, and the code in the advice body defines how the security check is implemented. That way, both the check and the places can be maintained in one place. Further, a good pointcut can anticipate later program changes, so if another developer creates a new method to access the bank account, the advice will apply to the new method when it executes. In the example above, the logging could likewise be implemented in a stand-alone aspect. One can think of AOP as a debugging tool or a user-level tool.
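The original AspectJ code for the banking example was omitted in this text. As a language-neutral sketch, the same separation of the logging concern from the transfer logic can be approximated with a Python decorator, where the wrapper plays the role of advice applied around the business logic (all names are illustrative, not from the article's example):

```python
import functools

log = []

def logged(fn):
    """Advice-like wrapper: adds logging behavior around `fn`
    without touching its body (the cross-cutting concern lives here)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.append(f"enter {fn.__name__}")
        try:
            return fn(*args, **kwargs)
        finally:
            log.append(f"exit {fn.__name__}")
    return wrapper

@logged
def transfer(accounts, src, dst, amount):
    # Only the business logic lives here; logging is applied from outside.
    accounts[src] -= amount
    accounts[dst] += amount

accounts = {"a": 100, "b": 50}
transfer(accounts, "a", "b", 30)
# accounts == {"a": 70, "b": 80}
# log == ["enter transfer", "exit transfer"]
```

Changing the logging strategy now means editing one wrapper, not every method, which is the maintenance benefit the text describes. Unlike true AOP, however, the decorator must still be attached explicitly at each function, a distinction the Criticism section returns to under attribute-oriented programming.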
Advice should be reserved for cases in which one cannot get the function changed (user level) or do not want to change the function in production code (debugging). == Join point models == The advice-related component of an aspect-oriented language defines a join point model (JPM). A JPM defines three things: When the advice can run. These are called join points because they are points in a running program where additional behavior can be usefully joined. A join point needs to be addressable and understandable by an ordinary programmer to be useful. It should also be stable across inconsequential program changes to maintain aspect stability. Many AOP implementations support method executions and field references as join points. A way to specify (or quantify) join points, called pointcuts. Pointcuts determine whether a given join point matches. Most useful pointcut languages use a syntax like the base language (for example, AspectJ uses Java signatures) and allow reuse through naming and combination. A means of specifying code to run at a join point. AspectJ calls this advice, and can run it before, after, and around join points. Some implementations also support defining a method in an aspect on another class. Join-point models can be compared based on the join points exposed, how join points are specified, the operations permitted at the join points, and the structural enhancements that can be expressed. === AspectJ's join-point model === === Other potential join point models === There are other kinds of JPMs. All advice languages can be defined in terms of their JPM. For example, a hypothetical aspect language for UML may have the following JPM: Join points are all model elements. Pointcuts are some Boolean expression combining the model elements. The means of affect at these points are a visualization of all the matched join points. 
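A pointcut's quantification over join points can be emulated in Python by matching method names with a predicate and wrapping every match, e.g. the classic "all methods whose name begins with 'set'" pointcut from the introduction. This is an illustrative emulation under simplified assumptions (method executions as the only join points), not AspectJ semantics; all names are invented:

```python
import functools

calls = []

def advise(cls, matches, advice):
    """Apply `advice` around every method of `cls` whose name
    satisfies the pointcut predicate `matches`."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and matches(name):
            setattr(cls, name, advice(attr))
    return cls

def log_call(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls.append(fn.__name__)  # before-advice: record the call
        return fn(*args, **kwargs)
    return wrapper

class Thermostat:
    def __init__(self):
        self.temp = 20
    def set_temp(self, t):
        self.temp = t
    def read_temp(self):
        return self.temp

# Pointcut: join points are method executions; match names starting with "set".
advise(Thermostat, lambda name: name.startswith("set"), log_call)

t = Thermostat()
t.set_temp(25)
t.read_temp()
# calls == ["set_temp"] -- read_temp did not match the pointcut
```

One declarative predicate reaches every matching join point, including methods added later to the class before `advise` runs, which is the quantification property that distinguishes pointcuts from per-call-site interception.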
=== Inter-type declarations === Inter-type declarations provide a way to express cross-cutting concerns affecting the structure of modules. Also known as open classes and extension methods, this enables programmers to declare in one place members or parents of another class, typically to combine all the code related to a concern in one aspect. For example, if a programmer implemented the cross-cutting display-update concern using visitors, an inter-type declaration in AspectJ can add an acceptVisitor method to the Point class from within the aspect. Any structural additions are required to be compatible with the original class, so that clients of the existing class continue to operate, unless the AOP implementation can expect to control all clients at all times. == Implementation == AOP programs can affect other programs in two different ways, depending on the underlying languages and environments: a combined program is produced, valid in the original language and indistinguishable from an ordinary program to the ultimate interpreter; or the ultimate interpreter or environment is updated to understand and implement AOP features. The difficulty of changing environments means most implementations produce compatible combination programs through a type of program transformation known as weaving. An aspect weaver reads the aspect-oriented code and generates appropriate object-oriented code with the aspects integrated. The same AOP language can be implemented through a variety of weaving methods, so the semantics of a language should never be understood in terms of the weaving implementation. Only the speed of an implementation and its ease of deployment are affected by the method of combination used. Systems can implement source-level weaving using preprocessors (as C++ was implemented originally in CFront) that require access to program source files.
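Python's open classes give a rough analogue of an inter-type declaration: a member can be declared outside the class it is added to, keeping all code for one concern together. The sketch below is illustrative only; the `Point`/visitor names echo the article's elided AspectJ example but the mechanism (attribute assignment, not weaving) is Python's own:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Declared in a separate "aspect" module, yet it becomes a member of
# Point, so all visitor-related code for the display-update concern
# can live in one place.
def accept_visitor(self, visitor):
    return visitor(self)

Point.accept_visitor = accept_visitor  # inter-type-style structural addition

p = Point(2, 3)
result = p.accept_visitor(lambda pt: (pt.x, pt.y))
# result == (2, 3)
```

As the text requires, the addition is compatible with the original class: existing clients of `Point` are unaffected because only a new member was introduced.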
However, Java's well-defined binary form enables bytecode weavers to work with any Java program in .class-file form. Bytecode weavers can be deployed during the build process or, if the weave model is per-class, during class loading. AspectJ started with source-level weaving in 2001, delivered a per-class bytecode weaver in 2002, and offered advanced load-time support after the integration of AspectWerkz in 2005. Any solution that combines programs at runtime must provide views that segregate them properly to maintain the programmer's segregated model. Java's bytecode support for multiple source files enables any debugger to step through a properly woven .class file in a source editor. However, some third-party decompilers cannot process woven code because they expect code produced by Javac rather than all supported bytecode forms (see also § Criticism, below). Deploy-time weaving offers another approach. This basically implies post-processing, but rather than patching the generated code, this weaving approach subclasses existing classes so that the modifications are introduced by method-overriding. The existing classes remain untouched, even at runtime, and all existing tools, such as debuggers and profilers, can be used during development. A similar approach has already proven itself in the implementation of many Java EE application servers, such as IBM's WebSphere. === Terminology === Standard terminology used in Aspect-oriented programming may include: Cross-cutting concerns Even though most classes in an object-oriented model will perform a single, specific function, they often share common, secondary requirements with other classes. For example, we may want to add logging to classes within the data-access layer and also to classes in the UI layer whenever a thread enters or exits a method. Further concerns can be related to security such as access control or information flow control. 
Even though each class has a very different primary functionality, the code needed to perform the secondary functionality is often identical. Advice: This is the additional code that you want to apply to your existing model. In our example, this is the logging code that we want to apply whenever the thread enters or exits a method. Pointcut: This refers to the point of execution in the application at which a cross-cutting concern needs to be applied. In our example, a pointcut is reached when the thread enters a method, and another pointcut is reached when the thread exits the method. Aspect: The combination of the pointcut and the advice is termed an aspect. In the example above, we add a logging aspect to our application by defining a pointcut and giving the correct advice. == Comparison to other programming paradigms == Aspects emerged from object-oriented programming and reflective programming. AOP languages have functionality similar to, but more restricted than, metaobject protocols. Aspects relate closely to programming concepts like subjects, mixins, and delegation. Other ways to use aspect-oriented programming paradigms include Composition Filters and the hyperslices approach. Since at least the 1970s, developers have been using forms of interception and dispatch-patching that resemble some of the implementation methods for AOP, but these never had the semantics that the cross-cutting specifications provide in one place. Designers have considered alternative ways to achieve separation of code, such as C#'s partial types, but such approaches lack a quantification mechanism that allows reaching several join points of the code with one declarative statement. Though it may seem unrelated, in testing, the use of mocks or stubs requires the use of AOP techniques, such as around advice. Here the collaborating objects are for the purpose of the test, a cross-cutting concern. Thus, the various Mock Object frameworks provide these features.
For example, a process invokes a service to get a balance amount. In the test of the process, it is unimportant where the amount comes from, but only that the process uses the balance according to the requirements. == Adoption issues == Programmers need to be able to read and understand code to prevent errors. Even with proper education, understanding cross-cutting concerns can be difficult without proper support for visualizing both static structure and the dynamic flow of a program. Starting in 2002, AspectJ began to provide IDE plug-ins to support the visualizing of cross-cutting concerns. Those features, as well as aspect code assist and refactoring, are now common. Given the power of AOP, making a logical mistake in expressing cross-cutting can lead to widespread program failure. Conversely, another programmer may change the join points in a program, such as by renaming or moving methods, in ways that the aspect writer did not anticipate and with unforeseen consequences. One advantage of modularizing cross-cutting concerns is enabling one programmer to easily affect the entire system. As a result, such problems manifest as a conflict over responsibility between two or more developers for a given failure. AOP can expedite solving these problems, as only the aspect must be changed. Without AOP, the corresponding problems can be much more spread out. == Criticism == The most basic criticism of the effect of AOP is that control flow is obscured, and that it is not only worse than the much-maligned GOTO statement, but is closely analogous to the joke COME FROM statement. The obliviousness of application, which is fundamental to many definitions of AOP (the code in question has no indication that an advice will be applied, which is specified instead in the pointcut), means that the advice is not visible, in contrast to an explicit method call. 
For example, compare the COME FROM program: with an AOP fragment with analogous semantics: Indeed, the pointcut may depend on runtime conditions and thus not be statically deterministic. This can be mitigated but not solved by static analysis and IDE support showing which advices potentially match. General criticisms are that AOP purports to improve "both modularity and the structure of code", but some counter that it instead undermines these goals and impedes "independent development and understandability of programs". Specifically, quantification by pointcuts breaks modularity: "one must, in general, have whole-program knowledge to reason about the dynamic execution of an aspect-oriented program." Further, while its goals (modularizing cross-cutting concerns) are well understood, its actual definition is unclear and not clearly distinguished from other well-established techniques. Cross-cutting concerns potentially cross-cut each other, requiring some resolution mechanism, such as ordering. Indeed, aspects can apply to themselves, leading to problems such as the liar paradox. Technical criticisms include that the quantification of pointcuts (defining where advices are executed) is "extremely sensitive to changes in the program", which is known as the fragile pointcut problem. The problems with pointcuts are deemed intractable. If one replaces the quantification of pointcuts with explicit annotations, one obtains attribute-oriented programming instead, which is simply an explicit subroutine call and suffers the identical problem of scattering, which AOP was designed to solve. == Implementations == Many programming languages have implemented AOP, within the language, or as an external library, including:
.NET framework languages (C#, Visual Basic (.NET) (VB.NET))
PostSharp is a commercial AOP implementation with a free but limited edition.
Unity provides an API to facilitate proven practices in core areas of programming including data access, security, logging, exception handling and others.
AspectDN is an AOP implementation that weaves aspects directly into .NET executable files.
ActionScript
Ada
AutoHotkey
C, C++
COBOL
The Cocoa Objective-C frameworks
ColdFusion
Common Lisp
Delphi
Delphi Prism
e (IEEE 1647)
Emacs Lisp
Groovy
Haskell
Java
AspectJ
JavaScript
Logtalk
Lua
make
Matlab
ML
Nemerle
Perl
PHP
Prolog
Python
Racket
Ruby
Squeak Smalltalk
UML 2.0
XML
== See also ==
Distributed AOP
Attribute grammar, a formalism that can be used for aspect-oriented programming on functional programming languages
Programming paradigms
Subject-oriented programming, an alternative to aspect-oriented programming
Role-oriented programming, an alternative to aspect-oriented programming
Predicate dispatch, an older alternative to aspect-oriented programming
Executable UML
Decorator pattern
Domain-driven design
== Notes and references ==
== Further reading ==
Kiczales, G.; Lamping, J.; Mendhekar, A.; Maeda, C.; Lopes, C.; Loingtier, J. M.; Irwin, J. (1997). Aspect-oriented programming (PDF). ECOOP'97. Proceedings of the 11th European Conference on Object-Oriented Programming. Lecture Notes in Computer Science (LNCS). Vol. 1241. pp. 220–242. CiteSeerX 10.1.1.115.8660. doi:10.1007/BFb0053381. ISBN 3-540-63089-9. The paper generally considered to be the authoritative reference for AOP.
Robert E. Filman; Tzilla Elrad; Siobhán Clarke; Mehmet Aksit (2004). Aspect-Oriented Software Development. Addison-Wesley. ISBN 978-0-321-21976-3.
Renaud Pawlak, Lionel Seinturier & Jean-Philippe Retaillé (2005). Foundations of AOP for J2EE Development. Apress. ISBN 978-1-59059-507-7.
Laddad, Ramnivas (2003). AspectJ in Action: Practical Aspect-Oriented Programming. Manning. ISBN 978-1-930110-93-9.
Jacobson, Ivar; Pan-Wei Ng (2005). Aspect-Oriented Software Development with Use Cases. Addison-Wesley. ISBN 978-0-321-26888-4.
Aspect-oriented Software Development and PHP, Dmitry Sheiko, 2006
Siobhán Clarke & Elisa Baniassad (2005). Aspect-Oriented Analysis and Design: The Theme Approach. Addison-Wesley. ISBN 978-0-321-24674-5.
Raghu Yedduladoddi (2009). Aspect Oriented Software Development: An Approach to Composing UML Design Models. VDM. ISBN 978-3-639-12084-4.
"Adaptive Object-Oriented Programming Using Graph-Based Customization" – Lieberherr, Silva-Lepe, et al. – 1994
Zambrano Polo y La Borda, Arturo Federico (5 June 2013). "Addressing aspect interactions in an industrial setting: experiences, problems and solutions": 159. doi:10.35537/10915/35861. Retrieved 30 May 2014.
Wijesuriya, Viraj Brian (2016-08-30) Aspect Oriented Development, Lecture Notes, University of Colombo School of Computing, Sri Lanka
Groves, Matthew D. (2013). AOP in .NET. Manning. ISBN 9781617291142.
== External links ==
Eric Bodden's list of AOP tools in .NET framework
Aspect-Oriented Software Development, annual conference on AOP
AspectJ Programming Guide
The AspectBench Compiler for AspectJ, another Java implementation
Series of IBM developerWorks articles on AOP
Laddad, Ramnivas (18 January 2002). "I want my AOP!, Part 1". JavaWorld. Retrieved 20 July 2020. A detailed series of articles on basics of aspect-oriented programming and AspectJ
What is Aspect-Oriented Programming?, introduction with RemObjects Taco Constraint-Specification Aspect Weaver
Aspect- vs. Object-Oriented Programming: Which Technique, When? Archived 15 April 2021 at the Wayback Machine
Gregor Kiczales, Professor of Computer Science, explaining AOP, video 57 min.
Aspect Oriented Programming in COBOL Archived 2008-12-17 at the Wayback Machine Aspect-Oriented Programming in Java with Spring Framework Wiki dedicated to AOP methods on .NET Early Aspects for Business Process Modeling (An Aspect Oriented Language for BPMN) Spring AOP and AspectJ Introduction AOSD Graduate Course at Bilkent University Introduction to AOP – Software Engineering Radio Podcast Episode 106 An Objective-C implementation of AOP by Szilveszter Molnar Aspect-Oriented programming for iOS and OS X by Manuel Gebele DevExpress MVVM Framework. Introduction to POCO ViewModels
https://en.wikipedia.org/wiki/Aspect-oriented_programming
The Elements of Programming Style, by Brian W. Kernighan and P. J. Plauger, is a study of programming style, advocating the notion that computer programs should be written not only to satisfy the compiler or personal programming "style", but also for "readability" by humans, specifically software maintenance engineers, programmers and technical writers. It was originally published in 1974. The book pays explicit homage, in title and tone, to The Elements of Style, by Strunk & White and is considered a practical template promoting Edsger Dijkstra's structured programming discussions. It has been influential and has spawned a series of similar texts tailored to individual languages, such as The Elements of C Programming Style, The Elements of C# Style, The Elements of Java(TM) Style, The Elements of MATLAB Style, etc. The book is built on short examples from actual, published programs in programming textbooks. This results in a practical treatment rather than an abstract or academic discussion. The style is diplomatic and generally sympathetic in its criticism, and unabashedly honest as well; some of the examples with which it finds fault are from the authors' own work (one example in the second edition is from the first edition). == Lessons == Its lessons are summarized at the end of each section in pithy maxims, such as "Let the machine do the dirty work":
Write clearly – don't be too clever.
Say what you mean, simply and directly.
Use library functions whenever feasible.
Avoid too many temporary variables.
Write clearly – don't sacrifice clarity for efficiency.
Let the machine do the dirty work.
Replace repetitive expressions by calls to common functions.
Parenthesize to avoid ambiguity.
Choose variable names that won't be confused.
Avoid unnecessary branches.
If a logical expression is hard to understand, try transforming it.
Choose a data representation that makes the program simple.
Write first in easy-to-understand pseudo language; then translate into whatever language you have to use.
Modularize. Use procedures and functions.
Avoid gotos completely if you can keep the program readable.
Don't patch bad code – rewrite it.
Write and test a big program in small pieces.
Use recursive procedures for recursively-defined data structures.
Test input for plausibility and validity.
Make sure input doesn't violate the limits of the program.
Terminate input by end-of-file marker, not by count.
Identify bad input; recover if possible.
Make input easy to prepare and output self-explanatory.
Use uniform input formats.
Make input easy to proofread.
Use self-identifying input. Allow defaults. Echo both on output.
Make sure all variables are initialized before use.
Don't stop at one bug.
Use debugging compilers.
Watch out for off-by-one errors.
Take care to branch the right way on equality.
Be careful if a loop exits to the same place from the middle and the bottom.
Make sure your code does "nothing" gracefully.
Test programs at their boundary values.
Check some answers by hand.
10.0 times 0.1 is hardly ever 1.0.
7/8 is zero while 7.0/8.0 is not zero.
Don't compare floating point numbers solely for equality.
Make it right before you make it faster.
Make it fail-safe before you make it faster.
Make it clear before you make it faster.
Don't sacrifice clarity for small gains in efficiency.
Let your compiler do the simple optimizations.
Don't strain to re-use code; reorganize instead.
Make sure special cases are truly special.
Keep it simple to make it faster.
Don't diddle code to make it faster – find a better algorithm.
Instrument your programs. Measure before making efficiency changes.
Make sure comments and code agree.
Don't just echo the code with comments – make every comment count.
Don't comment bad code – rewrite it.
Use variable names that mean something.
Use statement labels that mean something.
Format a program to help the reader understand it.
Document your data layouts.
Don't over-comment.
Modern readers may find it a shortcoming that its examples use older procedural programming languages (Fortran and PL/I) that are quite different from those popular today. Few of today's popular languages had been invented when this book was written. However, many of the book's points that generally concern stylistic and structural issues transcend the details of particular languages. == Reception == Kilobaud Microcomputing stated that "If you intend to write programs to be used by other people, then you should read this book. If you expect to become a professional programmer, this book is mandatory reading". == References == B. W. Kernighan and P. J. Plauger, The Elements of Programming Style, McGraw-Hill, New York, 1974. ISBN 0-07-034199-0 B. W. Kernighan and P. J. Plauger, The Elements of Programming Style 2nd Edition, McGraw Hill, New York, 1978. ISBN 0-07-034207-5 == External links == P. J. Plauger selected quotes from The Elements of Programming Style Elements of Programming Style – 2009 Brian Kernighan talk at Princeton on YouTube
https://en.wikipedia.org/wiki/The_Elements_of_Programming_Style
In programming language theory, semantics is the rigorous mathematical study of the meaning of programming languages. Semantics assigns computational meaning to valid strings in a programming language syntax. It is closely related to, and often crosses over with, the semantics of mathematical proofs. Semantics describes the processes a computer follows when executing a program in that specific language. This can be done by describing the relationship between the input and output of a program, or giving an explanation of how the program will be executed on a certain platform, thereby creating a model of computation. == History == In 1967, Robert W. Floyd published the paper Assigning meanings to programs; his chief aim was "a rigorous standard for proofs about computer programs, including proofs of correctness, equivalence, and termination". Floyd further wrote: A semantic definition of a programming language, in our approach, is founded on a syntactic definition. It must specify which of the phrases in a syntactically correct program represent commands, and what conditions must be imposed on an interpretation in the neighborhood of each command. In 1969, Tony Hoare published a paper on Hoare logic seeded by Floyd's ideas, now sometimes collectively called axiomatic semantics. In the 1970s, the terms operational semantics and denotational semantics emerged. == Overview == The field of formal semantics encompasses all of the following: The definition of semantic models The relations between different semantic models The relations between different approaches to meaning The relation between computation and the underlying mathematical structures from fields such as logic, set theory, model theory, category theory, etc. It has close links with other areas of computer science such as programming language design, type theory, compilers and interpreters, program verification and model checking. 
== Approaches == There are many approaches to formal semantics; these belong to three major classes:
Denotational semantics, whereby each phrase in the language is interpreted as a denotation, i.e. a conceptual meaning that can be thought of abstractly. Such denotations are often mathematical objects inhabiting a mathematical space, but it is not a requirement that they should be so. As a practical necessity, denotations are described using some form of mathematical notation, which can in turn be formalized as a denotational metalanguage. For example, denotational semantics of functional languages often translate the language into domain theory. Denotational semantic descriptions can also serve as compositional translations from a programming language into the denotational metalanguage and used as a basis for designing compilers.
Operational semantics, whereby the execution of the language is described directly (rather than by translation). Operational semantics loosely corresponds to interpretation, although again the "implementation language" of the interpreter is generally a mathematical formalism. Operational semantics may define an abstract machine (such as the SECD machine), and give meaning to phrases by describing the transitions they induce on states of the machine. Alternatively, as with the pure lambda calculus, operational semantics can be defined via syntactic transformations on phrases of the language itself.
Axiomatic semantics, whereby one gives meaning to phrases by describing the axioms that apply to them. Axiomatic semantics makes no distinction between a phrase's meaning and the logical formulas that describe it; its meaning is exactly what can be proven about it in some logic. The canonical example of axiomatic semantics is Hoare logic.
Apart from the choice between denotational, operational, or axiomatic approaches, most variations in formal semantic systems arise from the choice of supporting mathematical formalism. 
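As a concrete sketch of the operational style described above (illustrative only: the toy grammar, rule names, and function names are invented here, with Python standing in for the mathematical metalanguage), a small-step semantics for addition expressions can be written as a step function over terms, where the meaning of a phrase is the sequence of states its transitions induce:

```python
# A small-step operational semantics for a toy expression language.
# Terms are either integers (values) or ("add", left, right) nodes.

def is_value(term):
    """A term is a value when no further transitions apply."""
    return isinstance(term, int)

def step(term):
    """Perform one transition; assumes term is not yet a value."""
    op, left, right = term
    if is_value(left) and is_value(right):
        return left + right              # rule: Add (both sides reduced)
    if is_value(left):
        return (op, left, step(right))   # rule: RightStep
    return (op, step(left), right)       # rule: LeftStep

def evaluate(term):
    """Take transitions until a value is reached, recording each state."""
    trace = [term]
    while not is_value(term):
        term = step(term)
        trace.append(term)
    return trace

# (1 + 2) + 4 steps to 3 + 4, then to the value 7.
print(evaluate(("add", ("add", 1, 2), 4)))
```

Each entry in the returned trace is one machine state; a denotational treatment of the same language would instead map every term directly to the integer it denotes.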
== Variations == Some variations of formal semantics include the following:
Action semantics is an approach that tries to modularize denotational semantics, splitting the formalization process in two layers (macro and microsemantics) and predefining three semantic entities (actions, data and yielders) to simplify the specification.
Algebraic semantics is a form of axiomatic semantics based on algebraic laws for describing and reasoning about program semantics in a formal manner. It also supports denotational semantics and operational semantics.
Attribute grammars define systems that systematically compute "metadata" (called attributes) for the various cases of the language's syntax. Attribute grammars can be understood as a denotational semantics where the target language is simply the original language enriched with attribute annotations. Aside from formal semantics, attribute grammars have also been used for code generation in compilers, and to augment regular or context-free grammars with context-sensitive conditions.
Categorical (or "functorial") semantics uses category theory as the core mathematical formalism. Categorical semantics is usually proven to correspond to some axiomatic semantics that gives a syntactic presentation of the categorical structures. Also, denotational semantics are often instances of a general categorical semantics.
Concurrency semantics is a catch-all term for any formal semantics that describes concurrent computations. Historically important concurrent formalisms have included the actor model and process calculi.
Game semantics uses a metaphor inspired by game theory.
Predicate transformer semantics, developed by Edsger W. Dijkstra, describes the meaning of a program fragment as the function transforming a postcondition to the precondition needed to establish it.
== Describing relationships == For a variety of reasons, one might wish to describe the relationships between different formal semantics. 
For example:
To prove that a particular operational semantics for a language satisfies the logical formulas of an axiomatic semantics for that language. Such a proof demonstrates that it is "sound" to reason about a particular (operational) interpretation strategy using a particular (axiomatic) proof system.
To prove that operational semantics over a high-level machine is related by a simulation with the semantics over a low-level machine, whereby the low-level abstract machine contains more primitive operations than the high-level abstract machine definition of a given language. Such a proof demonstrates that the low-level machine "faithfully implements" the high-level machine.
It is also possible to relate multiple semantics through abstractions via the theory of abstract interpretation. == See also == Computational semantics Formal semantics (logic) Formal semantics (linguistics) Ontology Ontology (information science) Semantic equivalence Semantic technology == References == == Further reading == Textbooks == External links == Aaby, Anthony (2004). Introduction to Programming Languages. Archived from the original on 2015-06-19. Semantics.
https://en.wikipedia.org/wiki/Semantics_(computer_science)
In computer programming, a sigil is a symbol affixed to a variable name, showing the variable's datatype or scope, usually a prefix, as in $foo, where $ is the sigil. Sigil, from the Latin sigillum, meaning a "little sign", denotes a sign or image supposedly having magical power. Sigils can be used to separate and demarcate namespaces that possess different properties or behaviors. == Historical context == The use of sigils was popularized by the BASIC programming language. The best known example of a sigil in BASIC is the dollar sign ("$") appended to the names of all strings. Many BASIC dialects use other sigils (like "%") to denote integers and floating-point numbers and their precision, and sometimes other types as well. Larry Wall adopted shell scripting's use of sigils for his Perl programming language. In Perl, the sigils do not specify fine-grained data types like strings and integers, but the more general categories of scalars (using a prefixed "$"), arrays (using "@"), hashes (using "%"), and subroutines (using "&"). Raku also uses secondary sigils, or twigils, to indicate the scope of variables. Prominent examples of twigils in Raku include "^" (caret), used with self-declared formal parameters ("placeholder variables"), and ".", used with object attribute accessors (i.e., instance variables). == Sigil use in some languages == In CLIPS, scalar variables are prefixed with a "?" sigil, while multifield (e.g., a 1-level list) variables are prefixed with "$?". In Common Lisp, special variables (with dynamic scope) are typically surrounded with * in what is called the "earmuff convention". While this is only convention, and not enforced, the language itself adopts the practice (e.g., *standard-output*). Similarly, some programmers surround constants with +. In CycL, variables are prefixed with a "?" sigil. Similarly, constant names are prefixed with "#$" (pronounced "hash-dollar"). 
In Elixir, sigils are provided via the "~" symbol, followed by a letter to denote the type of sigil, and then delimiters. For example, ~r(foo) is a regular expression of "foo". Other sigils include ~s for strings and ~D for dates. Programmers can also create their own sigils. In the esoteric INTERCAL, variables are identified by a 16-bit integer prefixed with either "." (called "spot") for 16-bit values, ":" (called "twospot") for 32-bit values, "," ("tail") for arrays of 16-bit values and ";" ("hybrid") for arrays of 32-bit values. The later CLC-Intercal added "@" ("whirlpool") for a variable that can contain no value (used for classes) and "_" used to store a modified compiler. In MAPPER (aka BIS), named variables are prefixed with "<" and suffixed with ">" because strings or character values do not require quotes. In mIRC script, identifiers have a "$" sigil, while all variables have a "%" prefixed (regardless of local or global variables or data type). Binary variables are prefixed by an "&". In the MUMPS programming language, "$" precedes intrinsic function names and "special variable names" (built-in variables for accessing the execution state). "$Z" precedes non-standard intrinsic function names. "$$" precedes extrinsic function names. Routines (used for procedures, subroutines, functions) and global variables (database storage) are prefixed by a caret (^). The last global variable subtree may be referenced indirectly by a caret and the last subscript; this is referred to as a "naked reference". System-wide routines and global variables (stored in certain shared database(s)) are prefixed with ^%; these are referred to as "percent routines" and "percent globals". In Objective-C, string literals preceded with "@" are instances of the object type NSString or, since clang v3.1 / LLVM v4.0, NSNumber, NSArray or NSDictionary. The prefix @ is also used on the keywords interface, implementation, and end to express the structure of class definitions. 
Within class declarations and definitions as well, a prefix of - is used to indicate member methods and variables, while prefix + indicates class elements. In the PHP language, which was largely inspired by Perl, "$" precedes any variable name. Names not prefixed by this are considered constants, functions or class names (or interface or trait names, which share the same namespace as classes). PILOT uses "$" for buffers (string variables), "#" for integer variables, and "*" for program labels. Python uses a "__" prefix, called dunder, for "private" attributes. In Ruby, ordinary variables lack sigils, but "$" is prefixed to global variables, "@" is prefixed to instance variables, and "@@" is prefixed to class variables. Ruby also allows (strictly conventional) suffix sigils: "?" indicates a predicate method returning a boolean or a truthy or falsy value, and "!" indicates that the method may have a potentially unexpected effect and needs to be handled with care. In Scheme, by convention, the names of procedures that always return a boolean value usually end in "?". Likewise, the names of procedures that store values into parts of previously allocated Scheme objects (such as pairs, vectors, or strings) usually end in "!". Standard ML uses the prefix sigil "'" on a variable that refers to a type. If the sigil is doubled, it refers to a type for which equality is defined. The "'" character may also appear within or at the end of a variable, in which case it has no special meaning. In Transact-SQL, "@" precedes a local variable or parameter name. System functions (previously known as global variables) are distinguished by a "@@" prefix. The scope of temporary tables is indicated by the prefix "#" designating local and "##" designating global. In Windows PowerShell, which was partly inspired by Unix shells and Perl, variable names are prefixed by the "$" sigil. 
In XSLT, variables and parameters have a leading "$" sigil on use, although when defined in <xsl:param> or <xsl:variable> with the "name" attribute, the sigil is not included. Related to XSLT, XQuery uses the "$" sigil form both in definition and in use. In MEL, variable names are prefixed by "$" to distinguish them from functions, commands, and other identifiers. == Similar phenomena == === Shell scripting variables === In Unix shell scripting and in utilities such as Makefiles, the "$" is a unary operator that translates the name of a variable into its contents. While this may seem similar to a sigil, it is properly a unary operator for lexical indirection, similar to the * dereference operator for pointers in C, as noticeable from the fact that the dollar sign is omitted when assigning to a variable. === Identifier conventions === In Fortran, sigils are not used, but all variables starting with the letters I, J, K, L, M and N are integers by default. Fortran documentation refers to this as "implicit typing". Explicit typing is also available to allow any variable to be declared with any type. Various programming languages including Prolog, Haskell, Ruby and Go treat identifiers beginning with a capital letter differently from identifiers beginning with a small letter, a practice related to the use of sigils. === Stropping === In Microsoft's .NET Common Language Infrastructure (CLI), which supports many languages, a variable name in one language may be a keyword in a calling language; permitting such names anyway is actually a form of stropping, and is sometimes done with prefixes. In C#, any variable names may be prefixed with "@". This is mainly used to allow the use of variable names that would otherwise conflict with keywords. The same is achieved in VB.Net by enclosing the name in square brackets, as in [end]. The "@" prefix can also be applied to string literals; see literal affixes below. 
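The "@" string-literal prefix just mentioned changes how a literal is evaluated rather than what an identifier means; Python's raw-string prefix and hexadecimal prefix behave analogously. A minimal illustration (the variable names here are invented for the example):

```python
# Literal affixes change how a literal is evaluated, not what it names.
raw = r"C:\Windows"        # raw string: the backslash is kept literally
escaped = "C:\\Windows"    # escaped string: \\ denotes one backslash
assert raw == escaped      # both denote the same five... same string value

# Numeric literal prefixes work the same way: 0x marks hexadecimal.
assert 0x10 == 16

print(raw, 0x10)
```

Because these affixes attach to literals rather than identifiers, they are neither sigils nor stropping, as the Confusion subsection below notes for C#.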
=== Hungarian notation === Related to sigils is Hungarian notation, a naming convention for variables that specifies variable type by attaching certain alphabetic prefixes to the variable name. Unlike sigils, however, Hungarian notation provides no information to the compiler; as such, explicit types must be redundantly specified for the variables (unless using a language with type inference). As most standard compilers do not enforce use of the prefixes, this permits omission and also makes code prone to confusion due to accidental erroneous use. === Literal affixes === While sigils are applied to names (identifiers), similar prefixes and suffixes can be applied to literals, notably integer literals and string literals, specifying either how the literal should be evaluated, or what data type it is. For example, 0x10ULL evaluates to the value 16 as an unsigned long long integer in C++: the 0x prefix indicates hexadecimal, while the suffix ULL indicates unsigned long long. Similarly, prefixes are often used to indicate a raw string, such as r"C:\Windows" in Python, which represents the string with value C:\Windows; as an escaped string this would be written as "C:\\Windows". As this affects the semantics (value) of a literal, rather than the syntax or semantics of an identifier (name), this is neither stropping (identifier syntax) nor a sigil (identifier semantics), but it is syntactically similar. === Java annotations === Compare Java annotations such as @Override and @Deprecated. === Confusion === In some cases the same syntax can be used for distinct purposes, which can cause confusion. For example, in C#, the "@" prefix can be used either for stropping (to allow reserved words to be used as identifiers), or as a prefix to a literal (to indicate a raw string); in this case neither use is a sigil, as it affects the syntax of identifiers or the semantics of literals, not the semantics of identifiers. == See also == Delimiter Source code Token == References ==
https://en.wikipedia.org/wiki/Sigil_(computer_programming)
The following is a list of programs and films currently and formerly broadcast on Great American Family. The list also includes programming aired when the network was known as Great American Country. == Original films == === 2021 === === 2022 === (AH) Autumn Harvest === 2023 === === 2024 === === 2025 === == Current programming == === Original programming === ==== Drama ==== When Hope Calls (season 2; December 18, 2021, acquired from Crown Media) County Rescue === Syndicated programming === Shows currently broadcast on GAC Family include several sitcoms under an agreement with 20th Television, CBS Media Ventures, NBCUniversal Syndication Studios, Sony Pictures Television and Warner Bros. Television: ==== Sitcoms ==== Bewitched (Sony Pictures Television) The Facts of Life (Sony Pictures Television) Father Knows Best (Sony Pictures Television) Full House (Warner Bros. Television) Fuller House (Warner Bros. Television) Hazel (Sony Pictures Television) I Dream of Jeannie (Sony Pictures Television) Silver Spoons (Sony Pictures Television) The Andy Griffith Show (CBS Studios) Who's the Boss? 
(Sony Pictures Television) ==== Dramas ==== Columbo (Universal Television) Little House on the Prairie (Universal Television) Murder, She Wrote (Universal Television) Perry Mason (CBS Studios) The Lone Ranger (Universal Television) Wagon Train (Universal Television) == Upcoming programming == === In Development === == Former programming == === As Great American Country === All-American Amusement Parks (2014) Aloha Builds Barn Hunters Barnwood Builders Behind the Scenes (1997–2004) Betty White's Smartest Animals in America (2015) Big Wheels of Country (2003–2005) Carnival Eats Celebrity Kitchen with Lorianne Crook (2003–2005) Celebrity Motor Homes Country Music Across America (2003–2008) Country Requests Live (2000–2005) Crook & Chase (2003–2005) Design on a Dime Endless Yard Sale Showdown Farm Kings Fast Forward (1997–2005) GAC Classic (2001–2006) GAC Late Shift GAC Nights GAC Outdoor Country Gaither Gospel Hour Great American Roadhouse (2002–2003) Growing Up Gator Headline Country The Hitmen of Music Row (2007) Hot Country Nights (2004) I Brake for Yard Sales Inside Country (1998–2000) Into the Circle The Jennie Garth Project Junk Gypsies Kimberly's Simply Southern KingBilly (2008) Lakefront Bargain Hunt Living Countryfied Log Cabin Living Made in America (2003–2004) Main Street Videos Master Series Moving Country My Music Mix (2005–2009) Next GAC Star (2008) Offstage with Lorianne Crook (2005–2007) Oh That Dog of Mine! 
(1999) On the Edge of Country (1997–2008) Our Song (2009) On the Streets Opry Live Patriotic Country (2004) Pick a Puppy Positively GAC The Road Hammers (2008) Soundstage Superstar Sessions Tiny House, Big Living Top 15 Country Countdown (1997–2001) Top 20 Country Countdown (2004–2018) Top 50 Videos of the Year Tori and Dean: Cabin Fever Ultimate Sportsman's Lodge Wake Up Call Videos The Willis Clan Wrangler National Finals Rodeo The Year === As GAC Family/Great American Family === Bonanza (CBS Studios) The Beverly Hillbillies (CBS Studios) == Specials == === As GAC Family/Great American Family === Welcome to Great American Christmas (2021) When Hope Calls: Hearties Christmas Present (2021) Great American Christmas in Kentucky (2022) Great American Rescue Bowl (2023) == References == == External links == Great American Family
https://en.wikipedia.org/wiki/List_of_programs_and_films_broadcast_by_Great_American_Family
COBOL (an acronym for "common business-oriented language") is a compiled English-like computer programming language designed for business use. It is an imperative, procedural, and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Many large financial institutions were developing new systems in the language as late as 2006, but most programming in COBOL today is purely to maintain existing applications. Programs are being moved to new platforms, rewritten in modern languages, or replaced with other software. COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC, designed by Grace Hopper. It was created as part of a U.S. Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Defense Department promptly pressured computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has been revised five times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2023. COBOL statements have prose syntax such as MOVE x TO y, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words compared to the succinct and mathematically inspired syntax of other languages. The COBOL code is split into four divisions (identification, environment, data, and procedure), containing a rigid hierarchy of sections, paragraphs, and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions, and just one class. COBOL has been criticized for its verbosity, design process, and poor support for structured programming. 
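The four-division layout and the prose syntax described above can be seen in a minimal program sketch (illustrative only; the program name and data items are invented for this example, and the environment division is legitimately left empty):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO-MOVE.
       ENVIRONMENT DIVISION.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * Two numeric data items, each four digits wide.
       01 X PIC 9(4) VALUE 42.
       01 Y PIC 9(4).
       PROCEDURE DIVISION.
           MOVE X TO Y.
           DISPLAY Y.
           STOP RUN.
```

Each period-terminated line in the procedure division is a sentence, and the MOVE statement reads as the English prose the designers intended.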
These weaknesses often result in monolithic programs that are hard to comprehend as a whole, despite their local readability. For years, COBOL has been the assumed programming language for business operations on mainframes, although in recent years many COBOL workloads have been moved to cloud computing. == History and specification == === Background === In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost US$600,000. At a time when new programming languages were proliferating, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster. On 8 April 1959, Mary K. Hawes, a computer scientist at Burroughs Corporation, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages. Representatives included Grace Hopper (inventor of the English-like data processing language FLOW-MATIC), Jean Sammet, and Saul Gorn. At the April meeting, the group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they "thoroughly understood" the DoD's problems. The DoD operated 225 computers, had 175 more on order, and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs, and ease modernization. Charles Phillips agreed to sponsor the meeting, and tasked the delegation with drafting the agenda. 
=== COBOL 60 === On 28 and 29 May 1959, a meeting was held at the Pentagon to discuss the creation of a common programming language for business (exactly one year after the Zürich ALGOL 58 meeting). It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs. Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program, and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent, and be easy to use, even at the expense of power. The meeting resulted in the creation of a steering committee and short, intermediate, and long-range committees. The short-range committee was given until September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees. Their official mission, however, was to identify the strengths and weaknesses of existing programming languages; it did not explicitly direct them to create a new language. The deadline was met with disbelief by the short-range committee. One member, Betty Holberton, described the three-month deadline as "gross optimism" and doubted that the language really would be a stopgap. The steering committee met on 4 June and agreed to name the entire activity the Committee on Data Systems Languages, or CODASYL, and to form an executive committee. The short-range committee members represented six computer manufacturers and three government agencies. 
The computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The government agencies were the U.S. Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology). The committee was chaired by Joseph Wegstein of the U.S. National Bureau of Standards. Work began by investigating data descriptions, statements, existing applications, and user experiences. The committee mainly examined the FLOW-MATIC, AIMACO, and COMTRAN programming languages. The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes. FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee. FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands, and the separation of data descriptions and instructions. Hopper is sometimes called "the mother of COBOL" or "the grandmother of COBOL", although Jean Sammet, a lead designer of COBOL, said Hopper "was not the mother, creator, or developer of Cobol." IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC by a short-range committee made up of colleagues of Grace Hopper. Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process, and Jean Sammet said in 1981 that there had been a "strong anti-IBM bias" from some committee members (herself included). In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English. 
In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out." Features from COMTRAN incorporated into COBOL included formulas, the PICTURE clause, an improved IF statement, which obviated the need for GO TOs, and a more robust file management system. The usefulness of the committee's work was a subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple. Controversial features included those some considered useless or too advanced for data processing users. Such features included Boolean expressions, formulas, and table subscripts (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time), and functions (thought of as purely mathematical and of no use in data processing). The specifications were presented to the executive committee on 4 September. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions," and Bob Bemer later described them as a "hodgepodge." The committee was given until December to improve them. At a mid-September meeting, the committee discussed the new language's name. Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language), and "COCOSYL" (Common Computer Systems Language).
It is unclear who coined the name "COBOL", although Bob Bemer later claimed it had been his suggestion. In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it. This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste. It soon became apparent that the committee was too large to make any further progress quickly. A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure. A subcommittee was formed to analyze existing languages and was made up of six individuals: William Selden and Gertrude Tierney of IBM, Howard Bromberg and Howard Discount of RCA, Vernon Reeves and Jean E. Sammet of Sylvania Electric Products. The subcommittee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification. The specifications were approved by the executive committee on 8 January 1960, and sent to the government printing office, which printed them as COBOL 60. The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers. The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications. 
During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register, and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL. Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved. The relative influence of the languages that were used is still indicated in the recommended advisory printed in all COBOL reference manuals: COBOL is an industry language and is not the property of any company or group of companies, or of any organization or group of organizations. No warranty, expressed or implied, is made by any contributor or by the CODASYL COBOL Committee as to the accuracy and functioning of the programming system and language. Moreover, no responsibility is assumed by any contributor or by the committee in connection therewith. The authors and copyright holders of the copyrighted material used herein are as follows: FLOW-MATIC (trademark of Unisys Corporation), Programming for the UNIVAC (R) I and II, Data Automation Systems, copyrighted 1958, 1959, by Unisys Corporation; IBM Commercial Translator Form No. F28-8013, copyrighted 1959 by IBM; FACT, DSI 27A5260-2760, copyrighted 1960 by Minneapolis-Honeywell. They have specifically authorized the use of this material, in whole or in part, in the COBOL specifications. Such authorization extends to the reproduction and use of COBOL specifications in programming manuals or similar publications. 
=== COBOL-61 to COBOL-65 === Many logical flaws were found in COBOL 60, leading General Electric's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-term committee performed a total cleanup, and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained. Early COBOL compilers were primitive and slow. COBOL is a difficult language to write a compiler for, due to the large syntax and many optional elements within syntactic constructs, as well as the need to generate efficient code for a language with many possible data representations, implicit type conversions, and necessary set-ups for I/O operations. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91. In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease. The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables. === COBOL-68 === Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced USA Standard COBOL X3.23 in August 1968, which became the cornerstone for later versions. 
This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972. === COBOL-74 === By 1970, COBOL had become the most widely used programming language in the world. Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970, and 1973, including changes such as new inter-program communication, debugging, and file merging facilities, as well as improved string handling and library inclusion features. Although CODASYL was independent of the ANSI committee, the CODASYL Journal of Development was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee. The Programming Language Committee was not well-known, however. The vice president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It also lacked the funds to make public documents, such as minutes of meetings and change proposals, freely available. In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, the DELETE statement and the segmentation module. Deleted features included the NOTE statement, the EXAMINE statement (which was replaced by INSPECT), and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL but was reinstated before the standard was published. ISO later adopted the updated standard in 1978. === COBOL-85 === In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. 
In January 1981, Joseph T. Brophy, Senior Vice-president of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources". Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user". During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard. In 1979, ISO TC97-SC5 established the international COBOL Experts Group on the initiative of Wim Ebbinkhuijsen. The group consisted of COBOL experts from many countries, including the United States. Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need for new COBOL features. After three years, ISO changed the status of the group to a formal working group: WG 4 COBOL. The group took primary ownership of the COBOL standard's further development, with ANSI making most of the proposals. In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a COBOL-80 compiler for VAX/VMS and noted that conversion of COBOL-74 programs posed few problems. The new EVALUATE statement and inline PERFORM were particularly well received and improved productivity, thanks to simplified control flow and debugging.
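The two constructs singled out here can be illustrated with a brief sketch. This is an illustrative fragment, not code from the standard, and the data names (wage-type, i) are hypothetical:

```cobol
      *> Illustrative COBOL-85 style fragment (hypothetical data names).
      *> EVALUATE replaces chains of nested IF statements, and inline
      *> PERFORM keeps a loop body in place instead of in a separate
      *> paragraph; both are closed by explicit scope terminators.
           EVALUATE wage-type
               WHEN 'H'   DISPLAY 'Hourly'
               WHEN 'S'   DISPLAY 'Salaried'
               WHEN OTHER DISPLAY 'Unknown'
           END-EVALUATE

           PERFORM VARYING i FROM 1 BY 1 UNTIL i > 3
               DISPLAY i
           END-PERFORM
```

Before COBOL-85, the loop body would have had to live in its own paragraph and be invoked with PERFORM ... TIMES or PERFORM ... UNTIL, which scattered related logic across the program text.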
The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed. In 1985, the ISO Working Group 4 accepted the then-current version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985. Sixty features were changed or deprecated and 115 were added, such as:
- Scope terminators (END-IF, END-PERFORM, END-READ, etc.)
- Nested subprograms
- CONTINUE, a no-operation statement
- EVALUATE, a switch statement
- INITIALIZE, a statement that can set groups of data to their default values
- Inline PERFORM loop bodies – previously, loop bodies had to be specified in a separate procedure
- Reference modification, which allows access to substrings
- I/O status codes
The new standard was adopted by all national standard bodies, including ANSI. Two amendments followed, in 1989 and 1993. The first amendment introduced intrinsic functions and the other provided corrections.
=== COBOL 2002 and object-oriented COBOL ===
In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs. In the early 1990s, work began on adding object-oriented programming in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk. The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The final ISO standard was approved and published in late 2002. Fujitsu/GTSoftware and Micro Focus introduced object-oriented COBOL compilers targeting the .NET Framework. There were many other new features, many of which had been in the CODASYL COBOL Journal of Development since 1978 and had missed the opportunity to be included in COBOL-85.
These other features included:
- Free-form code
- User-defined functions
- Recursion
- Locale-based processing
- Support for extended character sets such as Unicode
- Floating-point and binary data types (until then, binary items were truncated based on their declaration's base-10 specification)
- Portable arithmetic results
- Bit and Boolean data types
- Pointers and syntax for getting and freeing storage
- The SCREEN SECTION for text-based user interfaces
- The VALIDATE facility
- Improved interoperability with other programming languages and framework environments such as .NET and Java
Three corrigenda were published for the standard: two in 2006 and one in 2009.
=== COBOL 2014 ===
Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL. COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced.
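Several of these additions can be seen together in a short sketch of a user-defined function written in free-form code. The function name, data names, and the computation itself are all hypothetical:

```cobol
*> Hedged sketch of a COBOL 2002 style user-defined function,
*> written in free-form code (hypothetical names and rate).
IDENTIFICATION DIVISION.
FUNCTION-ID. with-tax.
DATA DIVISION.
LINKAGE SECTION.
01 amount  PIC 9(7)V99.
01 result  PIC 9(7)V99.
PROCEDURE DIVISION USING amount RETURNING result.
    COMPUTE result = amount * 1.2
    GOBACK.
END FUNCTION with-tax.
```

A caller would typically declare the function in its REPOSITORY paragraph and then use it inside expressions, much like an intrinsic function.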
COBOL 2014 includes the following changes:
- Portable arithmetic results have been replaced by IEEE 754 data types
- Major features have been made optional, such as the VALIDATE facility, the report writer and the screen-handling facility
- Method overloading
- Dynamic capacity tables (a feature dropped from the draft of COBOL 2002)
=== COBOL 2023 ===
The COBOL 2023 standard added a few new features:
- Asynchronous messaging syntax using the SEND and RECEIVE statements
- A transaction processing facility with COMMIT and ROLLBACK
- XOR logical operator
- The CONTINUE statement can be extended to pause the program for a specified duration
- A DELETE FILE statement
- LINE SEQUENTIAL file organization
- Defined infinite looping with PERFORM UNTIL EXIT
- SUBSTITUTE intrinsic function, allowing for substring substitution of different lengths
- CONVERT function for base conversion
- Boolean shifting operators
There is as yet no known complete implementation of this standard.
== Legacy ==
COBOL programs are used globally in governments and various industries, including retail, travel, finance, and healthcare. Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. COBOL currently runs on diverse operating systems such as z/OS, z/VSE, VME, Unix, NonStop OS, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL, with over 200 billion lines of code and 5 billion lines more being written annually. As of 2020, COBOL ran background processes 95% of the time a credit or debit card was swiped.
=== Y2K ===
Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields.
Some studies attribute as much as "24% of Y2K software repair costs to Cobol". After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest "a gradual decline in the importance of COBOL in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted".
=== Modernization efforts ===
In 2006 and 2012, Computerworld surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said that they would do so if not for the expense of rewriting legacy code. Alternatively, some businesses have migrated their COBOL programs from mainframes to cheaper, faster hardware. By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite COBOL systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance; as a result, proposals to train more people in COBOL have been advanced. Several banks have undertaken multi-year COBOL modernization efforts, sometimes resulting in widespread service disruptions and fines. During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process was put on hold.
Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act. == Features == === Syntax === COBOL has an English-like syntax, which is used to describe nearly everything in COBOL programs. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be abbreviated by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more grammatically appropriate statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can TIME and TIMES, and VALUE and VALUES. Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see § PICTURE clause) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. 'Hello!'). Separators include the space character and commas and semi-colons followed by a space. A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs. 
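The division structure described above can be sketched in a minimal program. The program and data names are hypothetical:

```cobol
      *> Minimal sketch of COBOL's four divisions (hypothetical names).
       IDENTIFICATION DIVISION.
       PROGRAM-ID. hello-divisions.
       ENVIRONMENT DIVISION.         *> system-dependent features (none here)
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 greeting PIC X(12) VALUE 'Hello, COBOL'.
       PROCEDURE DIVISION.
           DISPLAY greeting
           STOP RUN.
```

The four divisions must appear in this order; empty divisions, such as the environment division here, may simply be omitted in most implementations.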
==== Metalanguage ==== COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. As an example, consider the following description of an ADD statement: This description permits the following variants: === Code format === The height of COBOL's popularity coincided with the era of keypunch machines and punched cards. The program itself was written onto punched cards, then read in and compiled, and the data fed into the program was sometimes on cards as well. COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were: In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column. COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator. === Identification division === The identification division identifies the following code entity and contains the definition of a class or interface. ==== Object-oriented programming ==== Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. 
There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions. COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves external code no way to access it. Method overloading was added in COBOL 2014. === Environment division === The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information. ==== Files ==== COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and which can be sorted on them. Each record must have a unique key, but other, alternate, record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. Other implementations are Record Management Services on OpenVMS and Enscribe on HPE NonStop (Tandem). Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access. A common non-standard extension is the line sequential organization, used to process text files. Records in a file are terminated by a newline and may be of varying length. 
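A file's organization is declared in the environment division's file-control paragraph. A hedged sketch for an indexed file, with hypothetical file and key names:

```cobol
      *> Hedged sketch: declaring an indexed file (hypothetical names).
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT customer-file ASSIGN TO 'customers.dat'
               ORGANIZATION IS INDEXED
               ACCESS MODE IS DYNAMIC       *> sequential or random access
               RECORD KEY IS customer-id    *> unique primary key
               ALTERNATE RECORD KEY IS customer-name
                   WITH DUPLICATES.         *> alternate keys need not be unique
```

For a sequential or relative file, the ORGANIZATION clause would name that organization instead, and the key clauses would be dropped or replaced by a RELATIVE KEY clause.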
=== Data division === The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section and the screen section, for text-based user interfaces. ==== Aggregated data ==== Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49. In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date. Subordinate items can be disambiguated with the IN (or OF) keyword. For example, consider the example code above along with the following example: The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). This syntax is similar to the "dot notation" supported by most contemporary languages. ==== Other data levels ==== A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used and, circa 1988, was usually found in old programs. 
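The hierarchy described in this passage can be sketched as follows. This is a reconstruction based on the names in the text; the PICTURE clauses are illustrative assumptions:

```cobol
      *> Reconstruction of the hierarchy described in the text;
      *> the PICTURE clauses are illustrative assumptions.
       01  some-record.               *> record (top-level group item)
           05  num        PIC 9(10).  *> elementary item
           05  the-date.              *> group item
               10  the-year  PIC 9(4).
               10  the-month PIC 99.
               10  the-day   PIC 99.
       01  sale-date.                 *> second group reusing the same names
           05  the-year   PIC 9(4).
           05  the-month  PIC 99.
           05  the-day    PIC 99.
```

With both groups in scope, the-year IN sale-date (or the equivalent the-year OF sale-date) unambiguously selects the item subordinate to sale-date.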
Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended, and many installations forbade its use. A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, a program might declare two 77-level data items, property-name and sales-region, as non-group data items that are independent of (not subordinate to) any other data items. An 88 level-number declares a condition name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause. For example, a program might define two 88-level condition-name items whose truth depends on the current character value of the wage-type data item. When the data item contains a value of 'H', the condition-name wage-is-hourly is true, whereas when it contains a value of 'S' or 'Y', the condition-name wage-is-yearly is true. If the data item contains some other value, both of the condition-names are false. ==== Data types ==== Standard COBOL provides the following data types: Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type. ===== PICTURE clause ===== A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted.
For example, a series of + characters defines character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters defines edited numeric or edited alphanumeric data items.
===== USAGE clause =====
The USAGE clause declares the format in which data is stored. Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats are:
- Binary, where a minimum size is either specified by the PICTURE clause or by a USAGE clause such as BINARY-LONG
- USAGE COMPUTATIONAL, where data may be stored in whatever format the implementation provides; often equivalent to USAGE BINARY
- USAGE DISPLAY, the default format, where data is stored as a string
- Floating-point, in either an implementation-dependent format or according to IEEE 754
- USAGE NATIONAL, where data is stored as a string using an extended character set
- USAGE PACKED-DECIMAL, where data is stored in the smallest possible decimal format (typically packed binary-coded decimal)
==== Report writer ====
The report writer is a declarative facility for creating reports.
The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings. Reports are associated with report files, which are files which may only be written to through report writer statements. Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. An example report description for a report which gives a salesperson's sales, and which warns of any invalid records, would describe the following layout:

    Sales Report                                          Page 1
    Seller: Howard Bromberg
    Sales on 10/12/2008 were $1000.00
    Sales on 12/12/2008 were $0.00
    Sales on 13/12/2008 were $31.47
    INVALID RECORD: Howard Bromberg XXXXYY
    Seller: Howard Discount
    ...
    Sales Report                                          Page 12
    Sales on 08/05/2014 were $543.98
    INVALID RECORD: William Selden 12052014FOOFOO
    Sales on 30/05/2014 were $0.00

Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing. For the above sales report example, the procedure division would use these statements to drive the report. Use of the Report Writer facility tends to vary considerably; some organizations use it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime.
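The four control statements can be sketched in a procedure division. The file, report, group, and condition names here are all hypothetical:

```cobol
      *> Hedged sketch of report writer control flow (hypothetical names).
       PROCEDURE DIVISION.
           OPEN INPUT sales-file OUTPUT report-file
           INITIATE sales-report            *> prepare the report writer
           PERFORM UNTIL end-of-file
               READ sales-file
                   AT END SET end-of-file TO TRUE
                   NOT AT END
                       IF valid-record
                           GENERATE sales-detail    *> print a detail group
                       ELSE
                           GENERATE invalid-line    *> print the warning group
                       END-IF
               END-READ
           END-PERFORM
           TERMINATE sales-report           *> finish report processing
           CLOSE sales-file report-file
           STOP RUN.
```

Page headings, footings, and control breaks would be produced automatically from the report description; the procedure division only feeds records to GENERATE.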
=== Procedure division ===

==== Procedures ====

The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections. Execution goes down through the procedures of a program until it is terminated. To use procedures as subroutines, the PERFORM verb is used. A PERFORM statement somewhat resembles a procedure call in newer languages in the sense that execution returns to the code following the PERFORM statement at the end of the called code; however, it does not provide a mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like PERFORM subroutine, then control returns at the end of the called procedure. However, PERFORM is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the PERFORM sub-1 THRU sub-n construct: The output of this program will be: "A A B C". PERFORM also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being PERFORM'ed may execute a PERFORM statement itself), but require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a PERFORM invocation that was called earlier but has not yet completed, the COBOL 2002 standard stipulates that the behavior is undefined. The reason is that COBOL, rather than a "return address", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address.
Before the program runs, the continuation address for every procedure is initialized to the start address of the procedure that comes next in the program text so that, if no PERFORM statements happen, control flows from top to bottom through the program. But when a PERFORM statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if PERFORM THRU was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere with each other's management of the continuation address in several ways. The following example (taken from Veerman & Verhoeven 2006) illustrates the problem: One might expect that the output of this program would be "1 2 3 4 3": After displaying "2", the second PERFORM causes "3" and "4" to be displayed, and then the first invocation continues on with "3". In traditional COBOL implementations, this is not the case. Rather, the first PERFORM statement sets the continuation address at the end of LABEL3 so that it will jump back to the call site inside LABEL1. The second PERFORM statement sets the return at the end of LABEL4 but does not modify the continuation address of LABEL3, expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of LABEL3, it jumps back to the outer PERFORM statement, and the program stops having printed just "1 2 3". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two PERFORM statements do not interfere with each other and the output is indeed "1 2 3 4 3". Therefore, the behavior in such cases is not only (perhaps) surprising, it is also not portable. A special consequence of this limitation is that PERFORM cannot be used to write recursive code.
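The single-slot continuation-address mechanism can be made concrete with a small interpreter. The sketch below is in Python (the COBOL listings from Veerman & Verhoeven 2006 are not reproduced here) and models only the traditional scheme described above, not any particular compiler; procedure and statement names are illustrative.

```python
# A toy model of traditional COBOL PERFORM semantics. Each procedure is a
# list of statements; 'perform' overwrites the continuation address of the
# last procedure in the performed range (single storage position, no stack).
def run(procs, start):
    names = list(procs)
    # Default continuation: fall through to the textually next procedure.
    cont = {n: ((names[i + 1], 0) if i + 1 < len(names) else None)
            for i, n in enumerate(names)}
    default = dict(cont)
    out, pc = [], (start, 0)
    for _ in range(1000):                    # step limit guards endless loops
        if pc is None:
            return out
        proc, idx = pc
        if idx >= len(procs[proc]):          # end of procedure: follow the
            pc, cont[proc] = cont[proc], default[proc]   # continuation, restore it
            continue
        op = procs[proc][idx]
        if op[0] == 'display':
            out.append(op[1])
            pc = (proc, idx + 1)
        elif op[0] == 'perform':             # PERFORM first THRU last
            cont[op[2]] = (proc, idx + 1)    # overwrite: source of interference
            pc = (op[1], 0)
        elif op[0] == 'stop':                # STOP RUN
            pc = None
    return out

example = {
    'LABEL1': [('display', '1'), ('perform', 'LABEL2', 'LABEL3'), ('stop',)],
    'LABEL2': [('display', '2'), ('perform', 'LABEL3', 'LABEL4')],
    'LABEL3': [('display', '3')],
    'LABEL4': [('display', '4')],
}
print(run(example, 'LABEL1'))   # ['1', '2', '3'], not the hoped-for 1 2 3 4 3
```

Because the second PERFORM never updates LABEL3's continuation, the inner invocation returns through the slot set by the outer one, exactly the interference the text describes.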
Another simple example to illustrate this (slightly simplified from Veerman & Verhoeven 2006): One might expect that the output is "1 2 3 END END END", and in fact that is what some COBOL compilers will produce. But other compilers, like IBM COBOL, will produce code that prints "1 2 3 END END END END ..." and so on, printing "END" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to DISPLAY 'END'.

==== Statements ====

COBOL 2014 has 47 statements (also called verbs), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section.

===== Control flow =====

COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe: The PERFORM statement is used to define loops which are executed until a condition is true (not while true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available). CANCEL unloads subprograms from memory. GO TO causes the program to jump to a specified procedure. The GOBACK statement is a return statement and the STOP statement stops the program.
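As a rough illustration of the decision-table style that EVALUATE enables, here is a Python sketch; the lathe conditions and action names are invented for this example and are not taken from any actual COBOL listing.

```python
# A decision table in the spirit of EVALUATE ... WHEN: rows are tried in
# order and the first matching one wins. Conditions and action names are
# hypothetical, chosen only to mirror the CNC-lathe scenario in the text.
def lathe_action(lid_closed, desired_speed, current_speed):
    rows = [
        (lambda: not lid_closed,                "STOP"),       # never cut with the lid open
        (lambda: desired_speed > current_speed, "SPEED-UP"),
        (lambda: desired_speed < current_speed, "SLOW-DOWN"),
        (lambda: True,                          "HOLD"),       # like WHEN OTHER
    ]
    return next(action for cond, action in rows if cond())

print(lathe_action(True, 900, 600))    # SPEED-UP
print(lathe_action(False, 900, 600))   # STOP
```

Each row corresponds to one WHEN clause; as with EVALUATE, the test order encodes the table's priority.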
The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure. Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division. Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the DECLARATIVES. Unlike in other languages, uncaught exceptions may not terminate the program, and the program can proceed unaffected.

===== I/O =====

File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed. User interaction is done using ACCEPT and DISPLAY.

===== Data manipulation =====

The following verbs manipulate data:
INITIALIZE, which sets data items to their default values.
MOVE, which assigns values to data items; MOVE CORRESPONDING assigns corresponding like-named fields.
SET, which has 15 formats: it can modify indices, assign object references and alter table capacities, among other functions.
ADD, SUBTRACT, MULTIPLY, DIVIDE, and COMPUTE, which handle arithmetic (with COMPUTE assigning the result of a formula to a variable).
ALLOCATE and FREE, which handle dynamic memory.
VALIDATE, which validates and distributes data as specified in an item's description in the data division.
STRING and UNSTRING, which concatenate and split strings, respectively.
INSPECT, which tallies or replaces instances of specified substrings within a string.
SEARCH, which searches a table for the first entry satisfying a condition.
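As a loose Python analogy for the string-handling verbs (the COBOL statements also take DELIMITED BY, POINTER and TALLYING options not modeled here, and the field names are hypothetical):

```python
# Rough Python counterparts of COBOL string verbs (illustrative only).
parts = ['John', 'Smith']                 # hypothetical source fields
full_name = ' '.join(parts)               # STRING ... INTO full-name
first, last = full_name.split(' ')        # UNSTRING ... DELIMITED BY SPACE
count = full_name.count('h')              # INSPECT ... TALLYING FOR ALL 'h'
masked = full_name.replace('h', '#')      # INSPECT ... REPLACING ALL 'h' BY '#'
print(full_name, count, masked)           # John Smith 2 Jo#n Smit#
```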
Files and tables are sorted using SORT and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order.

==== Scope termination ====

Some statements, such as IF and READ, may themselves contain statements. Such statements may be terminated in two ways: by a period (implicit termination), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement. Nested statements terminated with a period are a common source of bugs. For example, examine the following code: Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by an erroneous period after DISPLAY y. Another bug is a result of the dangling else problem, when two IF statements can associate with an ELSE. In the above fragment, the ELSE associates with the IF y statement instead of the IF x statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require ELSE NEXT SENTENCE to be placed after the inner IF.

==== Self-modifying code ====

The original (1959) COBOL specification supported the infamous ALTER X TO PROCEED TO Y statement, for which many compilers generated self-modifying code. X and Y are procedure labels, and the single GO TO statement in procedure X executed after such an ALTER statement means GO TO Y instead. Many compilers still support it, but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002. The ALTER statement was poorly regarded because it undermined "locality of context" and made a program's overall logic difficult to comprehend. As textbook author Daniel D. McCracken wrote in 1976, when "someone who has never seen the program before must become familiar with it as quickly as possible, sometimes under critical time pressure because the program has failed ...
the sight of a GO TO statement in a paragraph by itself, signaling as it does the existence of an unknown number of ALTER statements at unknown locations throughout the program, strikes fear in the heart of the bravest programmer."

=== Hello, world ===

A "Hello, World!" program in COBOL: When the now famous "Hello, World!" program example in The C Programming Language was first published in 1978, a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punch card reader and 80-column punch cards. The listing below, with an empty DATA DIVISION, was tested using Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley. In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters. After submitting the JCL, the MVS console displayed: Line 10 of the console listing above is highlighted for effect; the highlighting is not part of the actual console output. The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL.

== Reception ==

=== Lack of structure ===

In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975, entitled "How do we tell truths that might hurt?", in which he was critical of COBOL and several other contemporary languages, remarking that "the use of COBOL cripples the mind". In a published dissent to Dijkstra's remarks, the computer scientist Howard E. Tompkins claimed that unstructured COBOL tended to be "written by programmers that have never had the benefit of structured COBOL taught well", arguing that the issue was primarily one of training. One cause of spaghetti code was the GO TO statement.
Attempts to remove GO TOs from COBOL code, however, resulted in convoluted programs and reduced code quality. GO TOs were largely replaced by the PERFORM statement and procedures, which promoted modular programming and gave easy access to powerful looping facilities. However, PERFORM could be used only with procedures, so loop bodies were not located where they were used, making programs harder to understand. COBOL programs were infamous for being monolithic and lacking modularization. COBOL code could be modularized only through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake. Another complication stemmed from the ability to PERFORM THRU a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule. This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms. Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included. Nevertheless, much important legacy COBOL software uses unstructured code, which has become practically unmaintainable. It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways.

=== Compatibility issues ===

COBOL was intended to be a highly portable, "common" language. However, by 2001, around 300 dialects had been created.
One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 possible variants. COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard. As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs.

=== Verbose syntax ===

COBOL syntax has often been criticized for its verbosity. Proponents say that this was intended to make the code self-documenting, easing program maintenance. COBOL was also intended to be easy for programmers to learn and use, while still being readable to non-technical staff such as managers. The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions. Yet by 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code and the main changes in COBOL-85 were there to help ease maintenance. Jean Sammet, a short-range committee member, noted that "little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL" which she attributed to COBOL's verbose syntax. Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966). Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that "academics ...
hate COBOL" and that computer science graduates "had 'hate COBOL' drilled into them". By the mid-1980s, there was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems. In 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java. Ten years later, a poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it.

=== Concerns about the design process ===

Doubts have been raised about the competence of the standards committee. Short-term committee member Howard Bromberg said that there was "little control" over the development process and that it was "plagued by discontinuity of personnel and ... a lack of talent." Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence. COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped, COBOL 2002 was five years late, and COBOL 2014 was six years late. To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard.

=== Influences on other languages ===

COBOL's data structures influenced subsequent programming languages.
Its record and file structure influenced PL/I and Pascal, and the REDEFINES clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems, and aggregated data was a significant advance over Fortran's arrays. PICTURE data declarations were incorporated into PL/I, with minor changes. COBOL's COPY facility, although considered "primitive", influenced the development of include directives. The focus on portability and standardization meant COBOL programs could run on a wide variety of hardware platforms and operating systems, which facilitated the spread of the language. Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.

== See also ==

Alphabetical list of programming languages
BLIS/COBOL
CODASYL
Comparison of programming languages
Generational list of programming languages § COBOL based
List of compilers § COBOL compilers

== Notes ==

== References ==

=== Citations ===

=== Sources ===

== External links ==

COBOLStandard.info at the Wayback Machine (archived 10 January 2017)
ISO/IEC JTC1/SC22/WG4 - COBOL at the Wayback Machine (archived 22 August 2016)
COBOL Language Standard (1991; COBOL-85 with Amendment 1), from The Open Group
https://en.wikipedia.org/wiki/COBOL
A computer virus is a type of malware that, when executed, replicates itself by modifying other computer programs and inserting its own code into those programs. If this replication succeeds, the affected areas are then said to be "infected" with a computer virus, a metaphor derived from biological viruses. Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. By contrast, a computer worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks. Virus writers use social engineering deceptions and exploit detailed knowledge of security vulnerabilities to initially infect systems and to spread the virus. Viruses use complex anti-detection/stealth strategies to evade antivirus software. Motives for creating viruses can include seeking profit (e.g., with ransomware), sending a political message, personal amusement, demonstrating that a vulnerability exists in software, sabotage and denial of service, or simply a wish to explore cybersecurity issues, artificial life and evolutionary algorithms. As of 2013, computer viruses caused billions of dollars' worth of economic damage each year. In response, an industry of antivirus software has cropped up, selling or freely distributing virus protection to users of various operating systems.

== History ==

The first academic work on the theory of self-replicating computer programs was done in 1949 by John von Neumann who gave lectures at the University of Illinois about the "Theory and Organization of Complicated Automata". The work of von Neumann was later published as the "Theory of self-reproducing automata". In his essay von Neumann described how a computer program could be designed to reproduce itself.
Von Neumann's design for a self-reproducing computer program is considered the world's first computer virus, and he is considered to be the theoretical "father" of computer virology. In 1972, Veith Risak, directly building on von Neumann's work on self-replication, published his article "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (Self-reproducing automata with minimal information exchange). The article describes a fully functional virus written in assembler programming language for a SIEMENS 4004/35 computer system. In 1980, Jürgen Kraus wrote his Diplom thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs) at the University of Dortmund. In his work Kraus postulated that computer programs can behave in a way similar to biological viruses. The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s. Creeper was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1971. Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system. Creeper gained access via the ARPANET and copied itself to the remote system where the message, "I'M THE CREEPER. CATCH ME IF YOU CAN!" was displayed. The Reaper program was created to delete Creeper. In 1982, a program called "Elk Cloner" was the first personal computer virus to appear "in the wild"—that is, outside the single computer or computer lab where it was created. Written in 1981 by Richard Skrenta, a ninth grader at Mount Lebanon High School near Pittsburgh, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the personal computer and displaying a short poem beginning "Elk Cloner: The program with a personality." In 1984, Fred Cohen from the University of Southern California wrote his paper "Computer Viruses – Theory and Experiments".
It was the first paper to explicitly call a self-reproducing program a "virus", a term introduced by Cohen's mentor Leonard Adleman. In 1987, Cohen published a demonstration that there is no algorithm that can perfectly detect all possible viruses. Cohen's theoretical compression virus was an example of a virus which was not malicious software (malware), but was putatively benevolent (well-intentioned). However, antivirus professionals do not accept the concept of "benevolent viruses", as any desired function can be implemented without involving a virus (automatic compression, for instance, is available under Windows at the choice of the user). Any virus will by definition make unauthorised changes to a computer, which is undesirable even if no damage is done or intended. The first page of Dr Solomon's Virus Encyclopaedia explains the undesirability of viruses, even those that do nothing but reproduce. An article that describes "useful virus functionalities" was published by J. B. Gunn under the title "Use of virus functions to provide a virtual APL interpreter under user control" in 1984. The first IBM PC compatible virus in the "wild" was a boot sector virus dubbed (c)Brain, created in 1986 and released in 1987 by Amjad Farooq Alvi and Basit Farooq Alvi in Lahore, Pakistan, reportedly to deter unauthorized copying of the software they had written. The first virus to specifically target Microsoft Windows, WinVir, was discovered in April 1992, two years after the release of Windows 3.0. The virus did not contain any Windows API calls, instead relying on DOS interrupts. A few years later, in February 1996, Australian hackers from the virus-writing crew VLAD created the Bizatch virus (also known as "Boza" virus), which was the first known virus to specifically target Windows 95. This virus attacked the new portable executable (PE) files introduced in Windows 95.
In late 1997 the encrypted, memory-resident stealth virus Win32.Cabanas was released—the first known virus that targeted Windows NT (it was also able to infect Windows 3.0 and Windows 9x hosts). Even home computers were affected by viruses. The first one to appear on the Amiga was a boot sector virus called SCA virus, which was detected in November 1987. By 1988, one sysop reportedly found that viruses infected 15% of the software available for download on his BBS.

== Design ==

=== Parts ===

A computer virus generally contains three parts: the infection mechanism, which finds and infects new files; the payload, which is the malicious code to execute; and the trigger, which determines when to activate the payload.

Infection mechanism
Also called the infection vector, this is how the virus spreads. Some viruses have a search routine, which locates and infects files on disk. Other viruses infect files as they are run, such as the Jerusalem DOS virus.

Trigger
Also known as a logic bomb, this is the part of the virus that determines the condition for which the payload is activated. This condition may be a particular date, time, presence of another program, size on disk exceeding a threshold, or opening a specific file.

Payload
The payload is the body of the virus that executes the malicious activity. Examples of malicious activities include damaging files, theft of confidential information or spying on the infected system. Payload activity is sometimes noticeable as it can cause the system to slow down or "freeze". Sometimes payloads are non-destructive and their main purpose is to spread a message to as many people as possible. This is called a virus hoax.

=== Phases ===

Virus phases describe the life cycle of the computer virus, by analogy to biology. This life cycle can be divided into four phases:

Dormant phase
The virus program is idle during this stage.
The virus program has managed to access the target user's computer or software, but during this stage, the virus does not take any action. The virus will eventually be activated by the "trigger", which states which event will execute the virus. Not all viruses have this stage.

Propagation phase
The virus starts propagating, which is multiplying and replicating itself. The virus places a copy of itself into other programs or into certain system areas on the disk. The copy may not be identical to the propagating version; viruses often "morph" or change to evade detection by IT professionals and anti-virus software. Each infected program will now contain a clone of the virus, which will itself enter a propagation phase.

Triggering phase
A dormant virus moves into this phase when it is activated, and will now perform the function for which it was intended. The triggering phase can be caused by a variety of system events, including a count of the number of times that this copy of the virus has made copies of itself. The trigger may occur when an employee is terminated from their employment or after a set period of time has elapsed, in order to reduce suspicion.

Execution phase
This is the actual work of the virus, where the "payload" will be released. It can be destructive, such as deleting files on disk, crashing the system, or corrupting files, or relatively harmless, such as popping up humorous or political messages on screen.

== Targets and replication ==

Computer viruses infect a variety of different subsystems on their host computers and software. One manner of classifying viruses is to analyze whether they reside in binary executables (such as .EXE or .COM files), data files (such as Microsoft Word documents or PDF files), or in the boot sector of the host's hard drive (or some combination of all of these).
A memory-resident virus (or simply "resident virus") installs itself as part of the operating system when executed, after which it remains in RAM from the time the computer is booted up to when it is shut down. Resident viruses overwrite interrupt handling code or other functions, and when the operating system attempts to access the target file or disk sector, the virus code intercepts the request and redirects the control flow to the replication module, infecting the target. In contrast, a non-memory-resident virus (or "non-resident virus"), when executed, scans the disk for targets, infects them, and then exits (i.e. it does not remain in memory after it is done executing). Many common applications, such as Microsoft Outlook and Microsoft Word, allow macro programs to be embedded in documents or emails, so that the programs may be run automatically when the document is opened. A macro virus (or "document virus") is a virus that is written in a macro language and embedded into these documents so that when users open the file, the virus code is executed, and can infect the user's computer. This is one of the reasons that it is dangerous to open unexpected or suspicious attachments in e-mails. While not opening attachments in e-mails from unknown persons or organizations can help to reduce the likelihood of contracting a virus, in some cases, the virus is designed so that the e-mail appears to be from a reputable organization (e.g., a major bank or credit card company). Boot sector viruses specifically target the boot sector and/or the Master Boot Record (MBR) of the host's hard disk drive, solid-state drive, or removable storage media (flash drives, floppy disks, etc.). Boot sector viruses are most commonly transmitted through physical media: when the volume boot record (VBR) of an infected floppy disk or USB flash drive connected to the computer is read, the virus transfers data and then modifies or replaces the existing boot code.
The next time a user tries to start the desktop, the virus will immediately load and run as part of the master boot record. Email viruses are viruses that intentionally, rather than accidentally, use the email system to spread. While virus infected files may be accidentally sent as email attachments, email viruses are aware of email system functions. They generally target a specific type of email system (Microsoft Outlook is the most commonly used), harvest email addresses from various sources, and may append copies of themselves to all email sent, or may generate email messages containing copies of themselves as attachments.

== Detection ==

To avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the DOS platform, make sure that the "last modified" date of a host file stays the same when the file is infected by the virus. This approach does not fool antivirus software, however, especially programs which maintain and date cyclic redundancy checks on file changes. Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are called cavity viruses. For example, the CIH virus, or Chernobyl Virus, infects Portable Executable files. Because those files have many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file. Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them (for example, Conficker). A virus may also hide its presence using a rootkit by not showing itself on the list of system processes or by disguising itself within a trusted process. In the 2010s, as computers and operating systems grew larger and more complex, old hiding techniques needed to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permission for every kind of file access.
In addition, only a small fraction of known viruses actually cause real incidents, primarily because many viruses remain below the theoretical epidemic threshold. === Read request intercepts === While some kinds of antivirus software employ various techniques to counter stealth mechanisms, once the infection occurs, any recourse to "clean" the system is unreliable. In Microsoft Windows operating systems, the NTFS file system is proprietary. This leaves antivirus software little alternative but to send a "read" request to the Windows files that handle such requests. Some viruses trick antivirus software by intercepting its requests to the operating system. A virus can hide by intercepting the request to read the infected file, handling the request itself, and returning an uninfected version of the file to the antivirus software. The interception can occur by code injection of the actual operating system files that would handle the read request. Thus, antivirus software attempting to detect the virus will either not be permitted to read the infected file, or the "read" request will be served with the uninfected version of the same file. The only reliable method to avoid "stealth" viruses is to boot from a medium that is known to be "clean". Security software can then be used to check the dormant operating system files. Most security software relies on virus signatures, or it employs heuristics. Security software may also use a database of file "hashes" for Windows OS files, so that it can identify altered files and request Windows installation media to replace them with authentic versions. In older versions of Windows, file cryptographic hash functions of Windows OS files stored in Windows—to allow file integrity/authenticity to be checked—could be overwritten so that the System File Checker would report that altered system files are authentic, so using file hashes to scan for altered files would not always guarantee finding an infection.
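The hash-database approach described above can be illustrated with a minimal sketch: compute a cryptographic digest for each file against a known-good baseline and report any mismatch. The function names (`sha256_of`, `find_altered`) are illustrative, not from any real security product.

```python
import hashlib


def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks
    so that large files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def find_altered(baseline, paths):
    """Return the paths whose current digest differs from the
    known-good baseline (a dict mapping path -> expected digest)."""
    return [p for p in paths if sha256_of(p) != baseline.get(p)]
```

A real integrity checker must also protect the baseline itself (e.g. keep it on read-only media), since the paragraph above notes that stored hashes could themselves be overwritten by malware.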
=== Self-modification === Most modern antivirus programs try to find virus-patterns inside ordinary programs by scanning them for so-called virus signatures. Different antivirus programs will employ different search methods when identifying viruses. If a virus scanner finds such a pattern in a file, it will perform other checks to make sure that it has found the virus, and not merely a coincidental sequence in an innocent file, before it notifies the user that the file is infected. The user can then delete, or (in some cases) "clean" or "heal" the infected file. Some viruses employ techniques that make detection by means of signatures difficult but probably not impossible. These viruses modify their code on each infection. That is, each infected file contains a different variant of the virus. One method of evading signature detection is to use simple encryption to encipher (encode) the body of the virus, leaving only the encryption module and a static cryptographic key in cleartext which does not change from one infection to the next. In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that finding some may be reason enough for virus scanners to at least "flag" the file as suspicious. 
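Signature scanning, as described above, reduces to searching file contents for known byte patterns. The sketch below shows the core idea under the simplest possible assumptions; the signature database and its entries are purely hypothetical, and real scanners use far more sophisticated matching (wildcards, offsets, emulation).

```python
# Hypothetical signature database: name -> byte pattern.
# These patterns are invented for illustration only.
SIGNATURES = {
    "Example.A": b"\xde\xad\xbe\xef\x90\x90",
    "Example.B": b"MALCODE",
}


def scan_bytes(data):
    """Return the names of all signatures whose pattern occurs in data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]


def scan_file(path):
    """Scan a file on disk by reading its full contents."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())
```

As the surrounding text notes, a match alone is not proof of infection: a scanner performs further checks before reporting, precisely because a "coincidental sequence in an innocent file" would otherwise produce a false alarm.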
An old but compact technique is the use of arithmetic operations such as addition or subtraction combined with logical operations such as XOR, in which each byte of the virus is XORed with a constant, so that the exclusive-or operation need only be repeated for decryption. It is suspicious for code to modify itself, so the code that performs the encryption/decryption may be part of the signature in many virus definitions. A simpler older approach did not use a key, where the encryption consisted only of operations with no parameters, like incrementing and decrementing, bitwise rotation, arithmetic negation, and logical NOT. Some viruses, called polymorphic viruses, will employ a means of encryption inside an executable in which the virus is encrypted under certain events, such as the virus scanner being disabled for updates or the computer being rebooted. This is called cryptovirology. Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by a decryption module. In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A well-written polymorphic virus therefore has no parts which remain identical between infections, making it very difficult to detect directly using "signatures". Antivirus software can detect it by decrypting the viruses using an emulator, or by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus has to have a polymorphic engine (also called "mutating engine" or "mutation engine") somewhere in its encrypted body. See polymorphic code for technical detail on how such engines operate. Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly.
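The constant-key XOR scheme mentioned above is symmetric in the literal sense: the same transformation both enciphers and deciphers. A minimal sketch (the key value 0x5A is arbitrary):

```python
def xor_transform(data, key=0x5A):
    """XOR every byte with a constant key. Because (b ^ k) ^ k == b,
    applying the same operation twice restores the original data."""
    return bytes(b ^ key for b in data)


payload = b"example payload"
enciphered = xor_transform(payload)
assert xor_transform(enciphered) == payload  # same operation decrypts
```

This is exactly why such viruses remain easy to detect indirectly: the tiny decryption loop itself never changes between infections and can serve as the signature.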
For example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such slow polymorphic code is that it makes it more difficult for antivirus professionals and investigators to obtain representative samples of the virus, because "bait" files that are infected in one run will typically contain identical or similar samples of the virus. This makes it more likely that detection by the virus scanner will be unreliable, and that some instances of the virus may be able to avoid detection. To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new executables. Viruses that utilize this technique are said to use metamorphic code. To enable metamorphism, a "metamorphic engine" is needed. A metamorphic virus is usually very large and complex. For example, W32/Simile consisted of over 14,000 lines of assembly language code, 90% of which is part of the metamorphic engine. == Effects == Damage results from causing system failure, corrupting data, wasting computer resources, increasing maintenance costs, or stealing personal information. Even though no antivirus software can uncover all computer viruses (especially new ones), computer security researchers are actively searching for new ways to enable antivirus solutions to more effectively detect emerging viruses, before they become widely distributed. A power virus is a computer program that executes specific machine code to reach the maximum CPU power dissipation (thermal energy output for the central processing units). Computer cooling apparatus are designed to dissipate power up to the thermal design power, rather than maximum power, and a power virus could cause the system to overheat if it does not have logic to stop the processor. This may cause permanent physical damage.
Power viruses can be malicious, but are often suites of test software used for integration testing and thermal testing of computer components during the design phase of a product, or for product benchmarking. Stability test applications are similar programs which have the same effect as power viruses (high CPU usage) but stay under the user's control. They are used for testing CPUs, for example, when overclocking. A spinlock in a poorly written program may cause similar symptoms if it lasts sufficiently long. Different micro-architectures typically require different machine code to hit their maximum power. Examples of such machine code do not appear to be distributed in CPU reference materials. == Infection vectors == As software is often designed with security features to prevent unauthorized use of system resources, many viruses must exploit and manipulate security bugs, which are security defects in a system or application software, to spread themselves and infect other computers. Software development strategies that produce large numbers of "bugs" will generally also produce potential exploitable "holes" or "entrances" for the virus. To replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs (see code injection). If a user attempts to launch an infected program, the virus' code may be executed simultaneously. In operating systems that use file extensions to determine program associations (such as Microsoft Windows), the extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type than it appears to the user. For example, an executable may be created and named "picture.png.exe", in which the user sees only "picture.png" and therefore assumes that this file is a digital image and most likely is safe, yet when opened, it runs the executable on the client machine.
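The "picture.png.exe" trick above can be caught with a simple filename check. The sketch below assumes the Windows behaviour described in the text (only the final extension is hidden from the user); the function names and the extension list are illustrative.

```python
def apparent_name(filename):
    """Return the name a user would see with 'hide known extensions'
    enabled, i.e. the filename with only its final extension removed."""
    parts = filename.split(".")
    return ".".join(parts[:-1]) if len(parts) > 1 else filename


def looks_deceptive(filename, executable_exts=("exe", "scr", "com", "bat")):
    """Flag names whose real extension is executable but whose displayed
    form still appears to end in another extension (e.g. .png, .pdf)."""
    parts = filename.split(".")
    real_ext = parts[-1].lower() if len(parts) > 1 else ""
    return real_ext in executable_exts and "." in apparent_name(filename)
```

Mail gateways and endpoint tools apply similar (far more thorough) checks; the point of the sketch is only that the deception relies entirely on what the user interface chooses to display.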
Viruses may be installed on removable media, such as flash drives. The drives may be left in a parking lot of a government building or other target, with the hopes that curious users will insert the drive into a computer. In a 2015 experiment, researchers at the University of Michigan found that 45–98 percent of users would plug in a flash drive of unknown origin. The vast majority of viruses target systems running Microsoft Windows. This is due to Microsoft's large market share of desktop computer users. The diversity of software systems on a network limits the destructive potential of viruses and malware. Open-source operating systems such as Linux allow users to choose from a variety of desktop environments, packaging tools, etc., which means that malicious code targeting any of these systems will only affect a subset of all users. Many Windows users are running the same set of applications, enabling viruses to rapidly spread among Microsoft Windows systems by targeting the same exploits on large numbers of hosts. While Linux and Unix in general have always natively prevented normal users from making changes to the operating system environment without permission, Windows users are generally not prevented from making these changes, meaning that viruses can easily gain control of the entire system on Windows hosts. This difference has continued partly due to the widespread use of administrator accounts in contemporary versions like Windows XP. In 1997, researchers created and released a virus for Linux—known as "Bliss". Bliss, however, requires that the user run it explicitly, and it can only infect programs that the user has access to modify. Unlike Windows users, most Unix users do not log in as an administrator, or "root user", except to install or configure software; as a result, even if a user ran the virus, it could not harm their operating system. The Bliss virus never became widespread, and remains chiefly a research curiosity.
Its creator later posted the source code to Usenet, allowing researchers to see how it worked. Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. Personal computers of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy and boot sector viruses were the most common in the "wild" for many years. Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in bulletin board system (BBS) use, modem use, and software sharing. Bulletin board–driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by other computers. Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Microsoft Word and Microsoft Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected email messages, those that did exploited the Microsoft Outlook Component Object Model (COM) interface.
Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents". A virus may also send a web address link as an instant message to all the contacts (e.g., friends and colleagues' e-mail addresses) stored on an infected machine. If the recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating. Viruses that spread using cross-site scripting were first reported in 2002, and were academically demonstrated in 2005. There have been multiple instances of the cross-site scripting viruses in the "wild", exploiting websites such as MySpace (with the Samy worm) and Yahoo!. == Countermeasures == In 1989, the ADAPSO Software Industry Division published Dealing With Electronic Vandalism, in which it coupled the risk of data loss with "the added risk of losing customer confidence." Many users install antivirus software that can detect and eliminate known viruses when the computer attempts to download or run the executable file (which may be distributed as an email attachment, or on USB flash drives, for example). Some antivirus software blocks known malicious websites that attempt to install malware. Antivirus software does not change the underlying capability of hosts to transmit viruses. Users must update their software regularly to patch security vulnerabilities ("holes"). Antivirus software also needs to be regularly updated to recognize the latest threats. This is because malicious hackers and other individuals are always creating new viruses. The German AV-TEST Institute publishes evaluations of antivirus software for Windows and Android.
Examples of Microsoft Windows antivirus and anti-malware software include the optional Microsoft Security Essentials (for Windows XP, Vista and Windows 7) for real-time protection, the Windows Malicious Software Removal Tool (now included with Windows (Security) Updates on "Patch Tuesday", the second Tuesday of each month), and Windows Defender (an optional download in the case of Windows XP). Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Some such free programs are almost as good as commercial competitors. Common security vulnerabilities are assigned CVE IDs and listed in the US National Vulnerability Database. Secunia PSI is an example of software, free for personal use, that will check a PC for vulnerable out-of-date software, and attempt to update it. Ransomware and phishing scam alerts appear as press releases on the Internet Crime Complaint Center noticeboard. Ransomware is a virus that posts a message on the user's screen saying that the screen or system will remain locked or unusable until a ransom payment is made. Phishing is a deception in which the malicious individual pretends to be a friend, computer security expert, or other benevolent individual, with the goal of convincing the targeted individual to reveal passwords or other personal information. Other commonly used preventive measures include timely operating system updates, software updates, careful Internet browsing (avoiding shady websites), and installation of only trusted software. Certain browsers flag sites that have been reported to Google and that have been confirmed as hosting malware by Google. There are two common methods that an antivirus software application uses to detect viruses, as described in the antivirus software article. The first, and by far the most common method of virus detection is using a list of virus signature definitions.
This works by examining the content of the computer's memory (its Random Access Memory (RAM), and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives, or USB flash drives), and comparing those files against a database of known virus "signatures". Virus signatures are just strings of code that are used to identify individual viruses; for each virus, the antivirus designer tries to choose a unique signature string that will not be found in a legitimate program. Different antivirus programs use different "signatures" to identify viruses. The disadvantage of this detection method is that users are only protected from viruses that are detected by signatures in their most recent virus definition update, and not protected from new viruses (see "zero-day attack"). A second method to find viruses is to use a heuristic algorithm based on common virus behaviors. This method can detect new viruses for which antivirus security firms have yet to define a "signature", but it also gives rise to more false positives than using signatures. False positives can be disruptive, especially in a commercial environment, because it may lead to a company instructing staff not to use the company computer system until IT services have checked the system for viruses. This can slow down productivity for regular workers. === Recovery strategies and methods === One may reduce the damage done by viruses by making regular backups of data (and the operating systems) on different media, that are either kept unconnected to the system (most of the time, as in a hard drive), read-only or not accessible for other reasons, such as using different file systems. This way, if data is lost through a virus, one can start again using the backup (which will hopefully be recent). If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected by a virus (so long as a virus or infected file was not copied onto the CD/DVD). 
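The heuristic approach described above can be sketched as a weighted scoring of suspicious behaviours: each observed trait contributes to a score, and samples above a threshold are flagged. The trait names, weights, and threshold below are invented for illustration; real heuristic engines are far more elaborate.

```python
# Illustrative behaviour traits and weights (assumed, not from any
# real antivirus product).
SUSPICIOUS_TRAITS = {
    "writes_to_other_executables": 5,
    "hooks_interrupt_handlers": 4,
    "self_modifying_code": 4,
    "hides_from_process_list": 3,
}
THRESHOLD = 6


def heuristic_score(observed_traits):
    """Sum the weights of the observed traits; unknown traits score 0."""
    return sum(SUSPICIOUS_TRAITS.get(t, 0) for t in observed_traits)


def flag_as_suspicious(observed_traits):
    """Flag a sample whose combined trait score reaches the threshold."""
    return heuristic_score(observed_traits) >= THRESHOLD
```

This sketch also illustrates the false-positive problem the text raises: a legitimate program that happens to exhibit enough of these behaviours (a software updater patching executables, say) would be flagged just the same.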
Likewise, an operating system on a bootable CD can be used to start the computer if the installed operating systems become unusable. Backups on removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via removable flash drives. Many websites run by antivirus software companies provide free online virus scanning, with limited "cleaning" facilities (after all, the purpose of the websites is to sell antivirus products and services). Some websites—like Google subsidiary VirusTotal.com—allow users to upload one or more suspicious files to be scanned and checked by one or more antivirus programs in one operation. Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Microsoft offers an optional free antivirus utility called Microsoft Security Essentials, a Windows Malicious Software Removal Tool that is updated as part of the regular Windows update regime, and an older optional anti-malware (malware removal) tool Windows Defender that has been upgraded to an antivirus product in Windows 8. Some viruses disable System Restore and other important Windows tools such as Task Manager and CMD. An example of a virus that does this is CiaDoor. Many such viruses can be removed by rebooting the computer, entering Windows "safe mode" with networking, and then using system tools or Microsoft Safety Scanner. System Restore on Windows Me, Windows XP, Windows Vista and Windows 7 can restore the registry and critical system files to a previous checkpoint. Often a virus will cause a system to "hang" or "freeze", and a subsequent hard reboot will render a system restore point from the same day corrupted. Restore points from previous days should work, provided the virus is not designed to corrupt the restore files and does not exist in previous restore points. 
Microsoft's System File Checker (improved in Windows 7 and later) can be used to check for, and repair, corrupted system files. Restoring an earlier "clean" (virus-free) copy of the entire partition from a cloned disk, a disk image, or a backup copy is one solution—restoring an earlier backup disk "image" is relatively simple to do, usually removes any malware, and may be faster than "disinfecting" the computer—or reinstalling and reconfiguring the operating system and programs from scratch, as described below, then restoring user preferences. Reinstalling the operating system is another approach to virus removal. It may be possible to recover copies of essential user data by booting from a live CD, or connecting the hard drive to another computer and booting from the second computer's operating system, taking great care not to infect that computer by executing any infected programs on the original drive. The original hard drive can then be reformatted and the OS and all programs installed from original media. Once the system has been restored, precautions must be taken to avoid reinfection from any restored executable files. == Popular culture == The first known description of a self-reproducing program in fiction is in the 1970 short story The Scarred Man by Gregory Benford which describes a computer program called VIRUS which, when installed on a computer with telephone modem dialing capability, randomly dials phone numbers until it hits a modem that is answered by another computer, and then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers, in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE. 
His story was based on an actual computer virus written in FORTRAN that Benford had created and run on the lab computer in the 1960s, as a proof-of-concept, and which he told John Brunner about in 1970. The idea was explored further in two 1972 novels, When HARLIE Was One by David Gerrold and The Terminal Man by Michael Crichton, and became a major theme of the 1975 novel The Shockwave Rider by John Brunner. The 1973 Michael Crichton sci-fi film Westworld made an early mention of the concept of a computer virus, being a central plot theme that causes androids to run amok. Alan Oppenheimer's character summarizes the problem by stating that "...there's a clear pattern here which suggests an analogy to an infectious disease process, spreading from one...area to the next." To which the replies are: "Perhaps there are superficial similarities to disease" and, "I must confess I find it difficult to believe in a disease of machinery." In 2016, Jussi Parikka announced the creation of the Malware Museum: a collection of malware programs, usually viruses, distributed in the 1980s and 1990s on home computers. The Malware Museum is hosted at the Internet Archive and is curated by Mikko Hyppönen from Helsinki, Finland. The collection allows anyone with a computer to safely experience the virus infections of decades past. == Other malware == The term "virus" is also misused by extension to refer to other types of malware. "Malware" encompasses computer viruses along with many other forms of malicious software, such as computer "worms", ransomware, spyware, adware, trojan horses, keyloggers, rootkits, bootkits, malicious Browser Helper Objects (BHOs), and other malicious software. The majority of active malware threats are trojan horse programs or computer worms rather than computer viruses. The term computer virus, coined by Fred Cohen in 1985, is a misnomer.
Viruses often perform some type of harmful activity on infected host computers, such as acquisition of hard disk space or central processing unit (CPU) time, accessing and stealing private information (e.g., credit card numbers, debit card numbers, phone numbers, names, email addresses, passwords, bank information, house addresses, etc.), corrupting data, displaying political, humorous or threatening messages on the user's screen, spamming their e-mail contacts, logging their keystrokes, or even rendering the computer useless. However, not all viruses carry a destructive "payload" or attempt to hide themselves—the defining characteristic of viruses is that they are self-replicating computer programs that modify other software without user consent by injecting themselves into said programs, similar to a biological virus which replicates within living cells. == See also == == Notes == == References == == Further reading == == External links == 'Computer Viruses – Theory and Experiments' – The original paper by Fred Cohen, 1984 Hacking Away at the Counterculture by Andrew Ross (On hacking, 1990)
https://en.wikipedia.org/wiki/Computer_virus
TLC is an American multinational cable and satellite television network owned by the Networks division of Warner Bros. Discovery. First established in 1980 as The Learning Channel, it initially focused on educational and instructional programming. By the late 1990s, after an acquisition by Discovery, Inc. earlier in the decade, the network began to pivot towards reality television programming—predominantly focusing on programming involving lifestyles and personal stories—to the point that the previous name with "The Learning Channel" spelled out was phased out in favor of its initialism. As of November 2023, with its programming primarily dedicated to the nine-series 90 Day Fiancé universe, comprising 31% of the shows carried by the channel, TLC is available to approximately 71,000,000 pay television households in the United States—down from its 2011 peak of 100,000,000 households. == History == === 1972–1980: Early history as the Appalachian Educational Satellite Project === TLC's history traces to the 1972 formation of the Appalachian Educational Satellite Project (AESP), a distance education project formed by the Appalachian Regional Commission (ARC), in participation with the Education Satellite Communication Demonstration (ESCD), a partnership with the Department of Health, Education, and Welfare and NASA intended to transmit instructional, career and health programming via satellite to provide televised educational material to public schools and universities in the Appalachian region. ARC submitted a proposal to participate in the ESCD and use the ATS-6 communications satellite (launched into orbit in 1974) to disseminate "career education" programming to teachers at no cost; the consortium set up 15 earth station receiver sites across eight states in conjunction with local education service agencies. 
The ATS-6 temporarily ceased service to the Appalachian region after being re-orbited to India in September 1975; by the time the satellite reoriented to the United States the following year, the number of earth receivers used to transmit AESP content increased to 45 sites in Pennsylvania, Kentucky, Maryland, Virginia, West Virginia, Tennessee, Alabama, Georgia, North Carolina and South Carolina (some of which also acted as relays to local television stations in the region). All programming offered through the project was accepted for academic credit at 12 universities in the region. In October 1978, NASA disclosed the ATS-6 would suspend transmissions for 12 months due to transmission problems with the satellite. As a result, ARC decided to purchase transponder time on the commercial Satcom I communications satellite, in order to continue its distance education offerings. === 1980–1998: From ACSN to The Learning Channel, "A place for learning minds" === The non-profit Appalachian Community Service Network (ACSN) was incorporated in April 1980, maintaining a board of directors appointed by the Appalachian Regional Commission. The ACSN television service launched in October 1980 as ACSN – The Learning Channel; unlike the closed-circuit AESP, the network distributed its programming available directly to cable systems for home viewing. Its programming also expanded to include "informational" content. (NASA immediately launched NASA TV as the ACSN's internal replacement.) By 1982, ACSN claimed that it "achieved the fastest rate of growth of all basic cable programming services", with availability on around 70 cable affiliates reaching 1.5 million subscribers; by this point, 70 universities granted academic credit for telecourses carried on the network. On January 1, 1984, the network shortened its name to The Learning Channel. 
The channel mostly featured documentary content pertaining to nature, science, history, current events, medicine, technology, cooking, home improvement, and other information-based topics. A notable example of such programming was Sew, What's New?, a fashion design show presented by George W. Trippon. These were more focused, more technical, and of a more academic nature than the content that was being broadcast at the time on its eventual rival, The Discovery Channel, which launched in 1985. TLC was geared toward an inquisitive and narrow audience during this time, and had modest ratings. An exception to this viewership commonality was Captain's Log (produced and hosted by Mark Graves, also known as Captain Mark Gray), a weekly primetime boating safety series that aired from 1987 to 1990; the program often achieved between a 4.5 to 6 rating share and was the highest compensated series in the history of TLC with over 30 times the compensation of any other series on the network. In 1986, Infotech, Inc.—then-owner of the Financial News Network (FNN)—acquired a 51% interest in The Learning Channel for $3 million; the American Community Service Network retained a 31.5% share of the network, with the remaining 17.5% owned by network management. On February 15, 1991, The Discovery Channel, Inc.—owners of the namesake cable channel—announced it had reached an agreement to acquire The Learning Channel from ACSN and Infotech (the latter of which was in the process of a bankruptcy-led asset liquidation to repay creditors, subsequently resulting in the sale of the Financial News Network to a joint venture of NBC and Cablevision that integrated the network with rival financial news channel CNBC) for $12.75 million (equivalent to $29.43 million today). 
Under Discovery, The Learning Channel continued to focus primarily on instructional and educational programming for much of the 1990s; however, in what preceded its later expansion of such content, it also began to include shows less focused on education and geared more toward attracting popular consumption and mass marketing. In 1992, the network's name was shortened to "TLC", although the full name remained in use on alternating basis. TLC continued to offer educational programs such as Paleoworld (a show about prehistoric creatures), though more and more of its programming began to be devoted to niche audiences for shows regarding subjects like home improvement (Hometime and Home Savvy were two of the first), arts and crafts, crime programs such as The New Detectives, medical programming (particularly reality-based shows following real patients through the process of operations), and other shows that appealed to daytime audiences, particularly housewives. This was to be indicative of a major change in programming content and target audience over the next few years. === 1998–2008: "Life Unscripted", new direction === TLC began to explore new avenues starting in the late 1990s, deemphasizing educational material in favor of entertainment. Ready Set Learn!, the network's children's program block, was slowly reduced through the years as the network deliberately redirected viewers towards the full-day lineup of children's programming on Discovery Kids. The block was dropped completely in late 2008, and Cable in the Classroom programming, meant for recording by teachers, was discontinued in 2014. In 1998, the channel began to distance itself from its original name "The Learning Channel", and instead began to advertise itself only as "TLC". During this period, there was a huge shift in content, with most new programming being geared towards reality-drama and interior design shows. 
The huge success of shows like Trading Spaces, Junkyard Wars, A Wedding Story, and A Baby Story exemplified this new shift in programming towards more mass-appeal shows. This came at a time when Discovery itself was overhauling much of its own programming, in some cases introducing competition series of its own, such as American Chopper (which Discovery moved to TLC for a time). Much of the older, more educationally focused programming could still be found dispersed amongst other channels owned by Discovery Communications. On March 27, 2006, the network launched a new look and promotional campaign, dropping the "Life Unscripted" tag and introducing a new theme, "Live and Learn", trying to turn around the network's reliance on decorating shows and reality programming. As part of the new campaign, the channel's original name, "The Learning Channel", returned to occasional usage in promotions. The new theme also played on "life lessons", which featured heavily in the network's advertising and promotional clips. This campaign used humor to appeal to a target audience in their 30s. In 2007, TLC premiered Say Yes to the Dress, a reality series following clients of Kleinfeld Bridal in Manhattan. === 2008–present: Further focus on personal stories === In early March 2008, TLC launched a new imaging campaign, "Life Surprises". This new slogan came as TLC began to shift even more to personal stories, and away from the once-dominating home improvement shows. Programs focused on family life became the core of the channel. Jon & Kate Plus 8, which by 2008 was the highest-rated program on TLC, and Little People, Big World were joined by 17 Kids and Counting—a show which followed the lives of the Duggar family (and was in turn retitled 18 Kids and Counting, and then 19 Kids and Counting, as the family expanded), and Table for 12 in 2008 and 2009 respectively. 
The series Toddlers & Tiaras also debuted in 2008, and proved popular enough to spawn a spin-off in 2012, Here Comes Honey Boo Boo, focusing on the family life of recurring contestant Alana "Honey Boo Boo" Thompson. Also premiering on TLC in 2009 was Cake Boss, which focuses on the head baker at Carlo's Bakery and his staff, who mostly consist of his family. In July 2014, TLC introduced a new slogan and promotional campaign, "Everyone Needs a Little TLC", which continued to build upon the network's current focus on personal stories and family life. In 2014, Here Comes Honey Boo Boo was canceled after it was reported that Alana's mother had been dating a registered sex offender. In 2015, 19 Kids and Counting was canceled by TLC after the Duggars' eldest son, Josh Duggar, admitted to acts of molestation he had committed against minors while he was a teenager. It was subsequently succeeded by a spin-off series, Counting On, which followed the adult lives of Duggar family members; the series was canceled in June 2021 after Josh Duggar was arrested on child pornography charges (for which he was later convicted). In 2017, home design programming began to return to the network with the premiere of Nate & Jeremiah By Design; the series was renewed for a second season. In April 2018, TLC premiered a revival of Trading Spaces (which accompanied the season 2 premiere of Nate & Jeremiah By Design); the season premiere and an accompanying reunion special were seen by 2.8 million viewers, marking the network's highest-rated Saturday primetime program since 2010. In March 2018, Discovery Communications acquired Scripps Networks Interactive, and was renamed Discovery, Inc. TLC president Nancy Daniels left the network to become the chief brand officer of Discovery's factual networks, to replace the outgoing Rich Ross. 
She was replaced by Scripps Networks' chief programmer Kathleen Finch as chief brand officer of Discovery's lifestyle networks, overseeing TLC and the six networks formerly owned by SNI (such as HGTV and Food Network), among others. In 2019, HGTV and TLC premiered a co-commissioned revival of another former TLC series, While You Were Out; new episodes premiered on both networks simultaneously, with HGTV airing an alternate cut of the episode focusing more on the renovation process. == Programming == == High-definition feed == A high definition simulcast of TLC was launched on September 1, 2007. It is currently available on many subscription-television systems in the United States and Canada. == International == === Middle East and North Africa === OSN—a paid platform in the Middle East and North Africa—launched TLC HD and broadcast it with the Discovery Network, using the same form as the American TLC channel and adding new exclusive Arabic-English programs from its production as "Nidaa". It is broadcast in Israel, by satellite provider yes. === The Americas === ==== Canada ==== TLC's American feed is available in Canada on most cable and satellite providers, as it is authorized for carriage as a foreign cable television service by the Canadian Radio-television and Telecommunications Commission; save for a few differences it features the same programming schedule as that seen in the United States. ==== Latin America ==== The Latin American TLC HD, was launched on December 1, 2009, exclusively in high-definition, in the same style as the American channel (most of TLC's programming is available in standard-definition on Discovery Home & Health). On November 1, 2011, the Latin American version of Discovery Travel & Living was relaunched as TLC: Travel & Living Channel, which now also has a dedicated feed for Brazil. 
=== Europe === ==== United Kingdom and Ireland ==== An English-language version of the channel was originally launched in 1994 across Europe and was subsequently renamed Discovery Home and Leisure and later Discovery Real Time as part of Discovery's slate of themed channels. TLC relaunched in the UK and Ireland on April 30, 2013. ==== Romania ==== TLC Romania was launched on January 20, 2011, replacing the European version of Discovery Travel & Living in that country. On August 2, 2022, it launched its local Romanian feed and audio track, replacing the international feed. ==== Bulgaria ==== In early 2013 the channel launched in Bulgaria. ==== Finland ==== In November 2016, TLC became a free-to-air channel in Finland; before that, it was a pay channel. ==== France ==== TLC's French channel launched on February 26, 2024, as a replacement for Discovery Science. ==== Portugal ==== In November 2011, TLC Portugal debuted on the ZON TV (now NOS TV) cable and satellite services, and later on MEO TV. ==== Greece ==== On October 3, 2011, TLC Greece debuted on the Conn-x TV IPTV and OTE TV satellite services. ==== Germany ==== TLC Germany launched on April 10, 2014, on cable, IPTV and satellite services in both HD and SD. ==== Hungary ==== The channel's Hungarian version was launched on April 30, 2012, as TLC Hungary, replacing the European version of Discovery Travel & Living in Hungary (the SD version is available via satellite, cable and IPTV, and the HD version via Hungarian IPTV services). ==== Norway ==== A Norwegian version of the channel was launched on March 4, 2010, as TLC Norway, replacing the European version of Discovery Travel & Living in Norway. ==== Poland ==== On October 1, 2010, the Polish version of Discovery Travel & Living was relaunched as TLC Poland, replacing the European version of Discovery Travel & Living in Poland. 
==== The Balkans ==== TLC Balkans was also launched on October 1, 2010, replacing the European version of the "Travel & Living Channel" in Slovenia, Croatia, Bosnia and Herzegovina, Serbia, Montenegro and North Macedonia. TLC Balkans' playout is from Belgrade, Serbia. ==== Netherlands/Flanders (Belgium) ==== On July 4, 2011, a Dutch version was launched, time sharing with Animal Planet's standard definition feed. Animal Planet remained a 24-hour service for high-definition viewers. TLC became a 24-hour channel on January 8, 2013. It is also available in HD. ==== Switzerland ==== On June 3, 2014, the Swiss cable provider UPC Cablecom launched TLC in Switzerland. ==== Turkey ==== On November 6, 2015, TLC Turkey began broadcasting, replacing the CNBC-e channel. === Asia === On September 1, 2010, the Asia Pacific versions of Discovery Travel & Living were relaunched as TLC, with the acronym standing for "Travel and Living Channel". ==== India ==== An Indian version was launched in 2006 under the jurisdiction of Discovery Channel. It was relaunched as TLC on September 1, 2010. ==== South Korea ==== A South Korean version was launched on December 4, 2013, under Discovery Communications and CMB (Central Media Broadcasting Korea). The channel was replaced by EXF (Extreme Fun TV) on May 1, 2016. === Oceania === A New Zealand version was launched in 2015 on Sky Television in New Zealand. === Sub-Saharan Africa and South Africa === The network airs throughout the region on DStv, and launched on September 1, 2011. == References == == External links == Official website TLC Norway official website TLC Poland official website TLC Russian official website
https://en.wikipedia.org/wiki/TLC_(TV_network)
Hope is a programming language based on functional programming developed in the 1970s at the University of Edinburgh. It predates Miranda and Haskell and is contemporaneous with ML, also developed at the University. Hope was derived from NPL, a simple functional language developed by Rod Burstall and John Darlington in their work on program transformation. NPL and Hope are notable for being the first languages with call-by-pattern evaluation and algebraic data types. Hope was named for Sir Thomas Hope (c. 1681–1771), a Scottish agriculture reformer, after whom Hope Park Square in Edinburgh, the location of the artificial intelligence department at the time of the development of Hope, was also named. The first implementation of Hope used strict evaluation, but there have since been lazy evaluation versions and strict versions with lazy constructors. A successor language Hope+, developed jointly between Imperial College and International Computers Limited, added annotations to dictate either strict or lazy evaluation. == Language details == A factorial program in Hope is: dec fact : num -> num; --- fact 0 <= 1; --- fact n <= n*fact(n-1); Changing the order of clauses does not change the meaning of the program, because Hope's pattern matching always favors more specific patterns over less specific ones. Explicit declarations of data types in Hope are required; there is no type inference algorithm. Hope provides two built-in data structures: tuples and lists. == Implementations == Roger Bailey's Hope tutorial in the August 1985 issue of Byte references an interpreter for IBM PC DOS 2.0. British Telecom embarked on a project with Imperial College London to implement a version of Hope. The first release was coded by Thanos Vassilakis in 1986. Further releases were coded by Mark Tasng of British Telecom. == References == == External links == Hope Interpreter for Windows Entry in the online Dictionary of Programming Languages
https://en.wikipedia.org/wiki/Hope_(programming_language)
In computer science, an associative array, key-value store, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with finite domain. It supports 'lookup', 'remove', and 'insert' operations. The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays. The two major solutions to the dictionary problem are hash tables and search trees. It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures. Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays. Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern. The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors. == Operations == In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association. The operations that are usually defined for an associative array are: Insert or put add a new ( k e y , v a l u e ) {\displaystyle (key,value)} pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value. Remove or delete remove a ( k e y , v a l u e ) {\displaystyle (key,value)} pair from the collection, unmapping a given key from its value. The argument to this operation is the key. 
Lookup, find, or get find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation. If no value is found, some lookup functions raise an exception, while others return a default value (such as zero, null, or a specific value passed to the constructor). Associative arrays may also include other operations such as determining the number of mappings or constructing an iterator to loop over all the mappings. For such operations, the order in which the mappings are returned is usually implementation-defined. A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the mappings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as an argument and looks up the key associated with that value. === Properties === The operations of the associative array should satisfy various properties: lookup(k, insert(j, v, D)) = if k == j then v else lookup(k, D) lookup(k, new()) = fail, where fail is an exception or default value remove(k, insert(j, v, D)) = if k == j then remove(k, D) else insert(j, v, remove(k, D)) remove(k, new()) = new() where k and j are keys, v is a value, D is an associative array, and new() creates a new, empty associative array. === Example === Suppose that the set of loans made by a library is represented in a data structure. Each book in a library may be checked out by one patron at a time. However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would be: A lookup operation on the key "Great Expectations" would return "John". 
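The data-structure literal itself does not appear in the text above; a plausible Python sketch follows. Only the "Great Expectations" → "John" binding and the patron Pat are named in the article, so the other entries are illustrative:

```python
# Books are keys, patrons are values; each book has at most one borrower.
loans = {
    "Great Expectations": "John",    # named in the example above
    "Pride and Prejudice": "Alice",  # illustrative entry
    "Wuthering Heights": "Alice",    # illustrative entry
}

# Lookup: the value bound to a given key.
assert loans["Great Expectations"] == "John"

# John returns his book (deletion); Pat checks out a book (insertion).
del loans["Great Expectations"]
loans["Brave New World"] = "Pat"
```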
If John returns his book, that would cause a deletion operation, and if Pat checks out a book, that would cause an insertion operation, leading to a different state: == Implementation == For dictionaries with very few mappings, it may make sense to implement the dictionary using an association list, which is a linked list of mappings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of mappings. However, it is easy to implement and the constant factors in its running time are small. Another very simple implementation technique, usable when the keys are restricted to a narrow range, is direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no mapping for k then the cell stores a special sentinel value that indicates the lack of a mapping. This technique is simple and fast, with each dictionary operation taking constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small. The two major approaches for implementing dictionaries are a hash table or a search tree. === Hash table implementations === The most frequently used general-purpose implementation of an associative array is with a hash table: an array combined with a hash function that separates each key into a separate "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key's hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and usually outperform alternative implementations. Hash tables must be able to handle collisions: the mapping by the hash function of two different keys to the same bucket of the array. 
The two most widespread approaches to this problem are separate chaining and open addressing. In separate chaining, the array does not store the value itself but stores a pointer to another container, usually an association list, that stores all the values matching the hash. By contrast, in open addressing, if a hash collision is found, the table seeks an empty spot in an array to store the value in a deterministic manner, usually by looking at the next immediate position in the array. Open addressing has a lower cache miss ratio than separate chaining when the table is mostly empty. However, as the table becomes filled with more elements, open addressing's performance degrades exponentially. Additionally, separate chaining uses less memory in most cases, unless the entries are very small (less than four times the size of a pointer). === Tree implementations === ==== Self-balancing binary search trees ==== Another common approach is to implement an associative array with a self-balancing binary search tree, such as an AVL tree or a red–black tree. Compared to hash tables, these structures have both strengths and weaknesses. The worst-case performance of self-balancing binary search trees is significantly better than that of a hash table, with a time complexity in big O notation of O(log n). This is in contrast to hash tables, whose worst-case performance involves all elements sharing a single bucket, resulting in O(n) time complexity. In addition, and like all binary search trees, self-balancing binary search trees keep their elements in order. Thus, traversing its elements follows a least-to-greatest pattern, whereas traversing a hash table can result in elements being in seemingly random order. Because they are in order, tree-based maps can also satisfy range queries (find all values between two bounds) whereas a hashmap can only find exact values. 
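Returning to hash tables, the separate-chaining scheme can be sketched minimally in Python (a toy with a fixed bucket count and no resizing, not a production design):

```python
class ChainedDict:
    """Toy hash table using separate chaining: each bucket is a small
    association list of (key, value) pairs sharing the same hash slot."""

    def __init__(self, n_buckets: int = 8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Hash the key, then reduce it to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:               # overwrite any existing mapping
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def lookup(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def remove(self, key):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                del bucket[i]
                return
```

With few collisions, each bucket stays short, so insert, lookup, and remove average constant time.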
However, hash tables have a much better average-case time complexity than self-balancing binary search trees of O(1), and their worst-case performance is highly unlikely when a good hash function is used. A self-balancing binary search tree can be used to implement the buckets for a hash table that uses separate chaining. This allows for average-case constant lookup, but assures a worst-case performance of O(log n). However, this introduces extra complexity into the implementation and may cause even worse performance for smaller hash tables, where the time spent inserting into and balancing the tree is greater than the time needed to perform a linear search on all elements of a linked list or similar data structure. ==== Other trees ==== Associative arrays may also be stored in unbalanced binary search trees or in data structures specialized to a particular type of keys such as radix trees, tries, Judy arrays, or van Emde Boas trees, though the relative performance of these implementations varies. For instance, Judy trees have been found to perform less efficiently than hash tables, while carefully selected hash tables generally perform more efficiently than adaptive radix trees, with potentially greater restrictions on the data types they can handle. The advantages of these alternative structures come from their ability to handle additional associative array operations, such as finding the mapping whose key is the closest to a queried key when the query is absent in the set of mappings. === Comparison === == Ordered dictionary == The basic definition of a dictionary does not mandate an order. To guarantee a fixed order of enumeration, ordered versions of the associative array are often used. There are two senses of an ordered dictionary: The order of enumeration is always deterministic for a given set of keys by sorting. This is the case for tree-based implementations, one representative being the <map> container of C++. 
The order of enumeration is key-independent and is instead based on the order of insertion. This is the case for the "ordered dictionary" in .NET Framework, the LinkedHashMap of Java, and the built-in dict of Python (insertion-ordered since version 3.7). The latter sense is more common. Such ordered dictionaries can be implemented using an association list, by overlaying a doubly linked list on top of a normal dictionary, or by moving the actual data out of the sparse (unordered) array and into a dense insertion-ordered one. == Language support == Associative arrays can be implemented in any programming language as a package and many language systems provide them as part of their standard library. In some languages, they are not only built into the standard system, but have special syntax, often using array-like subscripting. Built-in syntactic support for associative arrays was introduced in 1969 by SNOBOL4, under the name "table". TMG offered tables with string keys and integer values. MUMPS made multi-dimensional associative arrays, optionally persistent, its key data structure. SETL supported them as one possible implementation of sets and maps. Most modern scripting languages, starting with AWK and including Rexx, Perl, PHP, Tcl, JavaScript, Maple, Python, Ruby, Wolfram Language, Go, and Lua, support associative arrays as a primary container type. In many more languages, they are available as library functions without special syntax. In Smalltalk, Objective-C, .NET, Python, REALbasic, Swift, VBA and Delphi they are called dictionaries; in Perl, Ruby and Seed7 they are called hashes; in C++, C#, Java, Go, Clojure, Scala, OCaml, Haskell they are called maps (see map (C++), unordered_map (C++), and Map); in Common Lisp and Windows PowerShell, they are called hash tables (since both typically use this implementation); in Maple and Lua, they are called tables. In PHP and R, all arrays can be associative, except that the keys are limited to integers and strings. 
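The insertion-ordered sense of an ordered dictionary is directly observable in Python, whose built-in dict enumerates keys in insertion order:

```python
d = {}
d["banana"] = 3
d["apple"] = 1
d["cherry"] = 2

# Enumeration follows insertion order, not key order.
assert list(d) == ["banana", "apple", "cherry"]

# Sorting the keys recovers the key-ordered enumeration that a
# tree-based map (the first sense) would provide natively.
assert sorted(d) == ["apple", "banana", "cherry"]
```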
In JavaScript (see also JSON), all objects behave as associative arrays with string-valued keys, while the Map and WeakMap types take arbitrary objects as keys. In Lua, they are used as the primitive building block for all data structures. In Visual FoxPro, they are called Collections. The D language also supports associative arrays. == Permanent storage == Many programs using associative arrays will need to store that data in a more permanent form, such as a computer file. A common solution to this problem is a generalized concept known as archiving or serialization, which produces a text or binary representation of the original objects that can be written directly to a file. This is most commonly implemented in the underlying object model, like .NET or Cocoa, which includes standard functions that convert the internal data into text. The program can create a complete text representation of any group of objects by calling these methods, which are almost always already implemented in the base associative array class. For programs that use very large data sets, this sort of individual file storage is not appropriate, and a database management system (DBMS) is required. Some database systems natively store associative arrays by serializing the data and then storing that serialized data and the key. Individual arrays can then be loaded or saved from the database using the key to refer to them. These key–value stores have been used for many years and have a history as long as that of the more common relational databases (RDBs), but a lack of standardization, among other reasons, limited their use to certain niche roles. RDBs were used for these roles in most cases, although saving objects to an RDB can be complicated, a problem known as object-relational impedance mismatch. 
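The archiving/serialization idea can be sketched with Python's standard json module (the data is illustrative):

```python
import json

loans = {"Great Expectations": "John", "Wuthering Heights": "Pat"}

# Serialize the associative array to a text representation
# that could be written directly to a file...
text = json.dumps(loans)

# ...and later reconstruct an equivalent array from that text.
restored = json.loads(text)
assert restored == loans
```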
After approximately 2010, the need for high-performance databases suitable for cloud computing and more closely matching the internal structure of the programs using them led to a renaissance in the key–value store market. These systems can store and retrieve associative arrays in a native fashion, which can greatly improve performance in common web-related workflows. == See also == Tuple Function (mathematics) == References == == External links == NIST's Dictionary of Algorithms and Data Structures: Associative Array
https://en.wikipedia.org/wiki/Associative_array
Advertiser-funded programming (AFP) is a recent term applied to a break away from the modern model of television funding in place since the early 1960s. Since that time, programmes have normally been funded by a broadcaster, which recouped the money by selling advertising space around the content. This model worked well for decades, but new technological advances have forced broadcasters and advertisers to re-think their relationship. The concept is as old as television itself; the term soap opera is derived from the fact that the original soap operas were in fact funded and produced by soap companies such as Procter & Gamble. Shows such as the Texaco Star Theater, which were among the earliest television programs, included the practice. It was not until the quiz show scandals of the late 1950s, when particularly aggressive advertisers began rigging game shows to produce a more entertaining product, that the practice fell by the wayside. By the time television became a worldwide phenomenon in the late 1950s and early 1960s, the original model had mostly been eschewed in favor of the modern model, which separates programming and advertising. (The fact that many of the early television broadcasters outside the United States were public broadcasters that restricted the use of advertising may have been a contributing factor to this.) With the advent of digital recording devices, also known as personal video recorders (PVRs), viewers can choose to record episodes or entire series of their favourite shows and watch them in their own time. Not only does this skew the idea of 'primetime' (advertisers being charged a premium for buying spots around the most popular viewing times), but it means viewers can skip the ads altogether. Advertiser-funded programming, largely a neologism, is a solution to this change and means the advertiser pays to integrate their message in the TV programme itself, rather than just buying advertising space around it. 
It includes product placement, sponsorship, naming rights and more recently the actual creation of whole shows from scratch. Many of these projects are enabled by a content partnership where the programming is co-funded by multiple stakeholders. Some recent examples of AFP: The Krypton Factor, in partnership with The Sage Group on ITV Beat: Life on the Street on ITV, in partnership with the Home Office Ford and Toyota in 24 Crest toothpaste in The Apprentice American Express in The Restaurant Findmypast.co.uk sponsored the genealogy TV series 'Find My Past' on the Yesterday channel in October 2011. Most sports organizations heavily restrict the use of advertiser funded programming, particularly amateur competitions such as the Olympic Games and the FIFA World Cup, both of which ban the practice as ambush marketing. Other sports have embraced the practice as an additional form of revenue, both for the leagues and the networks. Naming rights have been sold for bowl games, tournaments, television presentations, halftime shows, stadiums and arenas, with the practice of selling team names more common outside North America, while product placements and advertisements can be seen on the fields, on sideboards surrounding them, or as on-screen graphics without interrupting a telecast. Advertiser funded programming techniques give sports broadcasters a third channel of revenue, in addition to retransmission consent fees and traditional advertising, allowing stations such as ESPN to pay high rights fees and still make significant amounts of money. == References ==
https://en.wikipedia.org/wiki/Advertiser-funded_programming
Metaprogramming is a computer programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyse, or transform other programs, and even modify itself, while running. In some cases, this allows programmers to minimize the number of lines of code to express a solution, in turn reducing development time. It also allows programs more flexibility to efficiently handle new situations with no recompiling. Metaprogramming can be used to move computations from runtime to compile time, to generate code using compile time computations, and to enable self-modifying code. The ability of a programming language to be its own metalanguage allows reflective programming, and is termed reflection. Reflection is a valuable language feature to facilitate metaprogramming. Metaprogramming was popular in the 1970s and 1980s using list processing languages such as Lisp. Lisp machine hardware gained some notice in the 1980s, and enabled applications that could process code. They were often used for artificial intelligence applications. == Approaches == Metaprogramming enables developers to write programs and develop code that falls under the generic programming paradigm. Having the programming language itself as a first-class data type (as in Lisp, Prolog, SNOBOL, or Rebol) is also very useful; this is known as homoiconicity. Generic programming invokes a metaprogramming facility within a language by allowing one to write code without the concern of specifying data types since they can be supplied as parameters when used. Metaprogramming usually works in one of three ways. The first approach is to expose the internals of the runtime system (engine) to the programming code through application programming interfaces (APIs) like that for the .NET Common Intermediate Language (CIL) emitter. 
The second approach is dynamic execution of expressions that contain programming commands, often composed from strings, but can also be from other methods using arguments or context, like JavaScript. Thus, "programs can write programs." Although both approaches can be used in the same language, most languages tend to lean toward one or the other. The third approach is to step outside the language entirely. General purpose program transformation systems such as compilers, which accept language descriptions and carry out arbitrary transformations on those languages, are direct implementations of general metaprogramming. This allows metaprogramming to be applied to virtually any target language without regard to whether that target language has any metaprogramming abilities of its own. One can see this at work with Scheme and how it allows tackling some limits faced in C by using constructs that are part of the Scheme language to extend C. Lisp is probably the quintessential language with metaprogramming facilities, both because of its historical precedence and because of the simplicity and power of its metaprogramming. In Lisp metaprogramming, the unquote operator (typically a comma) introduces code that is evaluated at program definition time rather than at run time. The metaprogramming language is thus identical to the host programming language, and existing Lisp routines can be directly reused for metaprogramming if desired. This approach has been implemented in other languages by incorporating an interpreter in the program, which works directly with the program's data. There are implementations of this kind for some common high-level languages, such as RemObjects’ Pascal Script for Object Pascal. == Usages == === Code generation === A simple example of a metaprogram is this POSIX Shell script, which is an example of generative programming: This script (or program) generates a new 993-line program that prints out the numbers 1–992. 
This is only an illustration of how to use code to write more code; it is not the most efficient way to print out a list of numbers. Nonetheless, a programmer can write and execute this metaprogram in less than a minute, and will have generated over 1000 lines of code in that amount of time. A quine is a special kind of metaprogram that produces its own source code as its output. Quines are generally of recreational or theoretical interest only. Not all metaprogramming involves generative programming. If programs are modifiable at runtime, or if incremental compiling is available (such as in C#, Forth, Frink, Groovy, JavaScript, Lisp, Elixir, Lua, Nim, Perl, PHP, Python, Rebol, Ruby, Rust, R, SAS, Smalltalk, and Tcl), then techniques can be used to perform metaprogramming without generating source code. One style of generative approach is to employ domain-specific languages (DSLs). A fairly common example of using DSLs involves generative metaprogramming: lex and yacc, two tools used to generate lexical analysers and parsers, let the user describe the language using regular expressions and context-free grammars, and embed the complex algorithms required to efficiently parse the language. === Code instrumentation === One usage of metaprogramming is to instrument programs in order to do dynamic program analysis. == Challenges == Some argue that there is a steep learning curve to make complete use of metaprogramming features. Since metaprogramming gives more flexibility and configurability at runtime, misuse or incorrect use of metaprogramming can result in unwarranted and unexpected errors that can be extremely difficult to debug for an average developer. It can introduce risks in the system and make it more vulnerable if not used with care.
Common problems that can occur due to incorrect use of metaprogramming include the compiler being unable to identify missing configuration parameters, and invalid or incorrect data producing unknown exceptions or incorrect results. Due to this, some believe that only highly skilled developers should work on developing features which exercise metaprogramming in a language or platform, while average developers should learn how to use these features as part of established conventions. == Uses in programming languages == === Macro systems === Lisp, most dialects Clojure Common Lisp Racket Scheme hygienic macros MacroML Template Haskell Scala Nim Rust Haxe Julia Elixir === Macro assemblers === The IBM/360 and derivatives had powerful macro assembler facilities that were often used to generate complete assembly language programs or sections of programs (for different operating systems for instance). The CICS transaction processing system provided assembler macros that generated COBOL statements as a pre-processing step. Other assemblers, such as MASM, also support macros. === Metaclasses === Metaclasses are provided by the following programming languages: Common Lisp Python NIL Groovy Ruby Smalltalk Lua === Template metaprogramming === C "X Macros" C++ Templates D Common Lisp, Scheme and most Lisp dialects by using the quasiquote ("backquote") operator. Nim === Staged metaprogramming === MetaML MetaOCaml Scala natively or using the Lightweight Modular Staging Framework Terra === Dependent types === Use of dependent types allows proving that generated code is never invalid. However, this approach is leading-edge and rarely found outside of research programming languages. == Implementations == The list of notable metaprogramming systems is maintained at List of program transformation systems.
== See also == == References == == External links == c2.com Wiki: Metaprogramming article Meta Programming on the Program Transformation Wiki Code generation Vs Metaprogramming "Solenoid": The first metaprogramming framework for eXist-db
https://en.wikipedia.org/wiki/Metaprogramming
CPL (Combined Programming Language) is a multi-paradigm programming language developed in the early 1960s. It is an early ancestor of the C language via the BCPL and B languages. == Design == CPL was developed initially at the Mathematical Laboratory at the University of Cambridge as the "Cambridge Programming Language" and later published jointly between Cambridge and the University of London Computer Unit as the "Combined Programming Language" (CPL was also nicknamed by some as "Cambridge Plus London" or "Christopher's Programming Language"). Christopher Strachey, David Barron and others were involved in its development. The first paper describing it was published in 1963, while it was being implemented on the Titan Computer at Cambridge and the Atlas Computer at London. It was heavily influenced by ALGOL 60, but instead of being extremely small, elegant and simple, CPL was intended for a wider application area than scientific calculations and was therefore much more complex and not as elegant as ALGOL 60. CPL was a big language for its time. CPL attempted to go beyond ALGOL to include industrial process control, business data processing and possibly some early command line games. CPL was intended to allow low-level programming and high level abstractions using the same language. However, CPL was only implemented very slowly. The first CPL compiler was probably written about 1970, but the language never gained much popularity and seems to have disappeared without trace sometime in the 1970s. BCPL (for "Basic CPL", although originally "Bootstrap CPL") was a much simpler language based on CPL intended primarily as a systems programming language, particularly for writing compilers; it was first implemented in 1967, prior to CPL's first implementation. BCPL then led, via B, to the popular and influential C programming language. 
== Example == The function MAX as formulated by Peter Norvig:

Max(Items, ValueFunction) = value of
§ (Best, BestVal) = (NIL, -∞)
  while Items do
  § (Item, Val) = (Head(Items), ValueFunction(Head(Items)))
    if Val > BestVal then (Best, BestVal) := (Item, Val)
    Items := Rest(Items) ̸§
  result is Best ̸§

The closing section block symbol used here (̸§) is an approximation of the original symbol, in which the cross stroke is vertical. This is available in Unicode as §⃒ but does not display correctly on many systems. == Implementations == It is thought that CPL was never fully implemented in the 1960s, existing as a theoretical construct with some research work on partial implementations. Peter Norvig has written (for Yapps, a Python compiler-compiler) a simple CPL to Python translator for modern machines. == See also == Fundamental Concepts in Programming Languages == References == == Bibliography == How BCPL evolved from CPL, Martin Richards, 2011 [1] Collected papers of Christopher Strachey, section pertaining to CPL, archived at the Bodleian Library, Oxford; CSAC 71.1.80/C.136-C.184 D. W. Barron, J. N. Buxton, D. F. Hartley, E. Nixon, and C. Strachey. "The main features of CPL" The Computer Journal 6:2:134-143 (1963), available online. J. Buxton, J. C. Gray, and D. Park. CPL Elementary Programming Manual, Edition II (Cambridge) (1966). University of London Institute of Computer Science and The Mathematical Laboratory, Cambridge. CPL Working Papers (1966).
https://en.wikipedia.org/wiki/CPL_(programming_language)
The Advanced SCSI Programming Interface (ASPI) is a programming interface developed by Adaptec which standardizes communication on a computer bus between a SCSI driver module on the one hand and SCSI (and ATAPI) peripherals on the other.: 55–56  == Structure == The ASPI manager software provides an interface between ASPI modules (device drivers or applications with direct SCSI support), a SCSI host adapter, and SCSI devices connected to the host adapter. The ASPI manager is specific to the host adapter and operating system; its primary role is to abstract the host adapter specifics and provide a generic software interface to SCSI devices.: 56  On Windows 9x and Windows NT, the ASPI manager is generic and relies on the services of SCSI miniport drivers. On those systems, the ASPI interface is designed for applications which require SCSI pass-through functionality (such as CD-ROM burning software).: 57  The primary operations supported by ASPI are discovery of host adapters and attached devices, and submitting SCSI commands to devices via SRBs (SCSI Request Blocks).: 233  ASPI supports concurrent execution of SCSI commands.: 231  == History == ASPI was developed by Adaptec around 1989 and was formally introduced in January 1990. Originally supporting only MS-DOS, support for NetWare was added in 1991, while support for OS/2 and Windows 3.x was added in 1992. Originally developed only for SCSI devices, support for ATAPI devices was added later.: 772  Most other SCSI host adapter vendors (for example BusLogic, DPT, AMI, Future Domain, DTC) shipped their own ASPI managers with their hardware. Adaptec also developed generic SCSI disk and CD-ROM drivers for DOS (ASPICD.SYS and ASPIDISK.SYS).: 60–61  At least a couple of other programming interfaces for SCSI device drivers competed with ASPI in the early 1990s, including CAM (Common Access Method), developed by Apple; and Layered Device Driver Architecture, developed by Microsoft. 
However, ASPI was far and away more common than any of its competitors in this space, with PC Magazine declaring it a de facto standard for developing SCSI device drivers only two years after its introduction. Starting in 1995, Microsoft licensed the interface for use with their Windows 9x operating systems. At the same time Microsoft developed SCSI Pass Through Interface (SPTI), an in-house substitute that worked on the NT platform. Microsoft did not include ASPI in Windows 2000/XP, in favor of its own SPTI. To support USB drives under DOS, Panasonic developed a universal ASPI driver (USBASPI.SYS) that bypasses the lack of native USB support by DOS. == Drivers == ASPI was provided by the following drivers: == See also == SCSI Pass-Through Direct (SPTD) == References ==
https://en.wikipedia.org/wiki/Advanced_SCSI_Programming_Interface
In computer science, control flow (or flow of control) is the order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language. Within an imperative programming language, a control flow statement is a statement that results in a choice being made as to which of two or more paths to follow. For non-strict functional languages, functions and language constructs exist to achieve the same result, but they are usually not termed control flow statements. A set of statements is in turn generally structured as a block, which in addition to grouping, also defines a lexical scope. Interrupts and signals are low-level mechanisms that can alter the flow of control in a way similar to a subroutine, but usually occur as a response to some external stimulus or event (that can occur asynchronously), rather than execution of an in-line control flow statement. At the level of machine language or assembly language, control flow instructions usually work by altering the program counter. For some central processing units (CPUs), the only control flow instructions available are conditional or unconditional branch instructions, also termed jumps. 
== Categories == The kinds of control flow statements supported by different languages vary, but can be categorized by their effect: Continuation at a different statement (unconditional branch or jump) Executing a set of statements only if some condition is met (choice - i.e., conditional branch) Executing a set of statements zero or more times, until some condition is met (i.e., loop - the same as conditional branch) Executing a set of distant statements, after which the flow of control usually returns (subroutines, coroutines, and continuations) Stopping the program, preventing any further execution (unconditional halt) == Primitives == === Labels === A label is an explicit name or number assigned to a fixed position within the source code, and which may be referenced by control flow statements appearing elsewhere in the source code. A label marks a position within source code and has no other effect. Line numbers are an alternative to a named label used in some languages (such as BASIC). They are whole numbers placed at the start of each line of text in the source code. Languages which use these often impose the constraint that the line numbers must increase in value in each following line, but may not require that they be consecutive. For example, in BASIC: In other languages such as C and Ada, a label is an identifier, usually appearing at the start of a line and immediately followed by a colon. For example, in C: The language ALGOL 60 allowed both whole numbers and identifiers as labels (both linked by colons to the following statement), but few if any other ALGOL variants allowed whole numbers. Early Fortran compilers only allowed whole numbers as labels. Beginning with Fortran-90, alphanumeric labels have also been allowed. === Goto === The goto statement (a combination of the English words go and to, and pronounced accordingly) is the most basic form of unconditional transfer of control. 
Although the keyword may either be in upper or lower case depending on the language, it is usually written as: goto label The effect of a goto statement is to cause the next statement to be executed to be the statement appearing at (or immediately after) the indicated label. Goto statements have been considered harmful by many computer scientists, notably Dijkstra. === Subroutines === The terminology for subroutines varies; they may alternatively be known as routines, procedures, functions (especially if they return results) or methods (especially if they belong to classes or type classes). In the 1950s, computer memories were very small by current standards so subroutines were used mainly to reduce program size. A piece of code was written once and then used many times from various other places in a program. Today, subroutines are more often used to help make a program more structured, e.g., by isolating some algorithm or hiding some data access method. If many programmers are working on one program, subroutines are one kind of modularity that can help divide the work. === Sequence === In structured programming, the ordered sequencing of successive commands is considered one of the basic control structures, which is used as a building block for programs alongside iteration, recursion and choice. == Minimal structured control flow == In May 1966, Böhm and Jacopini published an article in Communications of the ACM which showed that any program with gotos could be transformed into a goto-free form involving only choice (IF THEN ELSE) and loops (WHILE condition DO xxx), possibly with duplicated code and/or the addition of Boolean variables (true/false flags). Later authors showed that choice can be replaced by loops (and yet more Boolean variables). 
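A small illustration of the Böhm–Jacopini construction, under an assumed tiny flowchart (read a value, jump to one of two labels, then jump to exit): the gotos are replaced by a single while loop driven by a state variable.

```python
# Sketch of the Böhm–Jacopini construction: an unstructured flowchart
# re-expressed with only one loop, one choice, and a state variable
# standing in for the goto labels.
def classify(n):
    state = "test"
    label = None
    while state != "done":      # the single loop
        if state == "test":     # the choice selects the next "label"
            state = "even" if n % 2 == 0 else "odd"
        elif state == "even":
            label = "even"
            state = "done"
        else:                   # state == "odd"
            label = "odd"
            state = "done"
    return label

print([classify(n) for n in (1, 2, 3)])   # ['odd', 'even', 'odd']
```

The extra state variable is exactly the kind of added Boolean/flag bookkeeping the theorem permits, and that later authors (and Kosaraju's refinement) tried to reduce.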
That such minimalism is possible does not mean that it is necessarily desirable; computers theoretically need only one machine instruction (subtract one number from another and branch if the result is negative), but practical computers have dozens or even hundreds of machine instructions. Other research showed that control structures with one entry and one exit were much easier to understand than any other form, mainly because they could be used anywhere as a statement without disrupting the control flow. In other words, they were composable. (Later developments, such as non-strict programming languages – and more recently, composable software transactions – have continued this strategy, making components of programs even more freely composable.) Some academics took a purist approach to the Böhm–Jacopini result and argued that even instructions like break and return from the middle of loops are bad practice as they are not needed in the Böhm–Jacopini proof, and thus they advocated that all loops should have a single exit point. This purist approach is embodied in the language Pascal (designed in 1968–1969), which up to the mid-1990s was the preferred tool for teaching introductory programming in academia. The direct application of the Böhm–Jacopini theorem may result in additional local variables being introduced in the structured chart, and may also result in some code duplication. Pascal is affected by both of these problems and according to empirical studies cited by Eric S. Roberts, student programmers had difficulty formulating correct solutions in Pascal for several simple problems, including writing a function for searching an element in an array. A 1980 study by Henry Shapiro cited by Roberts found that using only the Pascal-provided control structures, the correct solution was given by only 20% of the subjects, while no subject wrote incorrect code for this problem if allowed to write a return from the middle of a loop. 
== Control structures in practice == Most programming languages with control structures have an initial keyword which indicates the type of control structure involved. Languages then divide as to whether or not control structures have a final keyword. No final keyword: ALGOL 60, C, C++, Go, Haskell, Java, Pascal, Perl, PHP, PL/I, Python, PowerShell. Such languages need some way of grouping statements together: ALGOL 60 and Pascal: begin ... end C, C++, Go, Java, Perl, PHP, and PowerShell: curly brackets { ... } PL/I: DO ... END Python: uses indent level (see Off-side rule) Haskell: either indent level or curly brackets can be used, and they can be freely mixed Lua: uses do ... end Final keyword: Ada, APL, ALGOL 68, Modula-2, Fortran 77, Mythryl, Visual Basic. The forms of the final keyword vary: Ada: final keyword is end + space + initial keyword e.g., if ... end if, loop ... end loop APL: final keyword is :End optionally + initial keyword, e.g., :If ... :End or :If ... :EndIf, Select ... :End or :Select ... :EndSelect, however, if adding an end condition, the end keyword becomes :Until ALGOL 68, Mythryl: initial keyword spelled backwards e.g., if ... fi, case ... esac Fortran 77: final keyword is END + initial keyword e.g., IF ... ENDIF, DO ... ENDDO Modula-2: same final keyword END for everything Visual Basic: every control structure has its own keyword. If ... End If; For ... Next; Do ... Loop; While ... Wend == Choice == === If-then-(else) statements === Conditional expressions and conditional constructs are features of a programming language that perform different computations or actions depending on whether a programmer-specified Boolean condition evaluates to true or false. IF..GOTO. A form found in unstructured languages, mimicking a typical machine code instruction, would jump to (GOTO) a label or line number when the condition was met. IF..THEN..(ENDIF). 
Rather than being restricted to a jump, any simple statement, or nested block, could follow the THEN keyword. This is a structured form. IF..THEN..ELSE..(ENDIF). As above, but with a second action to be performed if the condition is false. This is one of the most common forms, with many variations. Some require a terminal ENDIF, others do not. C and related languages do not require a terminal keyword, or a 'then', but do require parentheses around the condition. Conditional statements can be and often are nested inside other conditional statements. Some languages allow ELSE and IF to be combined into ELSEIF, avoiding the need to have a series of ENDIF or other final statements at the end of a compound statement. Less common variations include: Some languages, such as early Fortran, have a three-way or arithmetic if, testing whether a numeric value is negative, zero, or positive. Some languages have a functional form of an if statement, for instance Lisp's cond. Some languages have an operator form of an if statement, such as C's ternary operator. Perl supplements a C-style if with when and unless. Smalltalk uses ifTrue and ifFalse messages to implement conditionals, rather than any fundamental language construct. === Case and switch statements === Switch statements (or case statements, or multiway branches) compare a given value with specified constants and take action according to the first constant to match. There is usually a provision for a default action ("else", "otherwise") to be taken if no match succeeds. Switch statements can allow compiler optimizations, such as lookup tables. In dynamic languages, the cases may not be limited to constant expressions, and might extend to pattern matching, as in shell scripting, where *) implements the default case as a glob matching any string. Case logic can also be implemented in functional form, as in SQL's decode statement.
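The functional form of case logic can be illustrated in Python with a dictionary dispatch, where dict.get supplies the default ("otherwise") branch; the status codes and messages here are made up for the example.

```python
# A multiway branch in functional form: constants map to actions, and
# dict.get provides the default branch when no constant matches.
def describe(status):
    cases = {
        200: lambda: "ok",
        404: lambda: "not found",
        500: lambda: "server error",
    }
    return cases.get(status, lambda: "unknown status")()

print(describe(404))   # not found
print(describe(418))   # unknown status
```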
== Loops == A loop is a sequence of statements which is specified once but which may be carried out several times in succession. The code "inside" the loop (the body of the loop, shown below as xxx) is obeyed a specified number of times, or once for each of a collection of items, or until some condition is met, or indefinitely. When one of those items is itself also a loop, it is called a "nested loop". In functional programming languages, such as Haskell and Scheme, both recursive and iterative processes are expressed with tail recursive procedures instead of syntactic looping constructs. === Count-controlled loops === Most programming languages have constructions for repeating a loop a certain number of times. In most cases counting can go downwards instead of upwards and step sizes other than 1 can be used. In these examples, if N < 1 then the body of the loop may execute once (with I having value 1) or not at all, depending on the programming language. In many programming languages, only integers can be reliably used in a count-controlled loop. Floating-point numbers are represented imprecisely due to hardware constraints, so a loop such as for X := 0.1 step 0.1 to 1.0 do might be repeated 9 or 10 times, depending on rounding errors and/or the hardware and/or the compiler version. Furthermore, if the increment of X occurs by repeated addition, accumulated rounding errors may mean that the value of X in each iteration can differ quite significantly from the expected sequence 0.1, 0.2, 0.3, ..., 1.0. === Condition-controlled loops === Most programming languages have constructions for repeating a loop until some condition changes. Some variations test the condition at the start of the loop; others test it at the end. If the test is at the start, the body may be skipped completely; if it is at the end, the body is always executed at least once.
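The floating-point pitfall described under count-controlled loops can be demonstrated directly. On IEEE-754 doubles, ten additions of 0.1 fall just short of 1.0, so a counter incremented by 0.1 overshoots:

```python
# Accumulating 0.1 never reaches exactly 1.0, so this loop runs 11 times,
# not the 10 a programmer might expect.
x = 0.0
iterations = 0
while x < 1.0:
    x += 0.1
    iterations += 1

print(iterations)          # 11
print(x == 1.1)            # False: x carries accumulated rounding error
print(sum([0.1] * 10))     # 0.9999999999999999
```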
A control break is a value change detection method used within ordinary loops to trigger processing for groups of values. Values are monitored within the loop and a change diverts program flow to the handling of the group event associated with them.

DO UNTIL (End-of-File)
   IF new-zipcode <> current-zipcode
      display_tally(current-zipcode, zipcount)
      current-zipcode = new-zipcode
      zipcount = 0
   ENDIF
   zipcount++
LOOP

=== Collection-controlled loops === Several programming languages (e.g., Ada, D, C++11, Smalltalk, PHP, Perl, Object Pascal, Java, C#, MATLAB, Visual Basic, Ruby, Python, JavaScript, Fortran 95 and later) have special constructs which allow implicit looping through all elements of an array, or all members of a set or collection.

someCollection do: [:eachElement |xxx].
for Item in Collection do begin xxx end;
foreach (item; myCollection) { xxx }
foreach someArray { xxx }
foreach ($someArray as $k => $v) { xxx }
Collection<String> coll; for (String s : coll) {}
foreach (string s in myStringCollection) { xxx }
someCollection | ForEach-Object { $_ }
forall ( index = first:last:step... )

Scala has for-expressions, which generalise collection-controlled loops, and also support other uses, such as asynchronous programming. Haskell has do-expressions and comprehensions, which together provide similar function to for-expressions in Scala. === General iteration === General iteration constructs such as C's for statement and Common Lisp's do form can be used to express any of the above sorts of loops, and others, such as looping over some number of collections in parallel. Where a more specific looping construct can be used, it is usually preferred over the general iteration construct, since it often makes the purpose of the expression clearer. === Infinite loops === Infinite loops are used to assure a program segment loops forever or until an exceptional condition arises, such as an error.
For instance, an event-driven program (such as a server) should loop forever, handling events as they occur, only stopping when the process is terminated by an operator. Infinite loops can be implemented using other control flow constructs. Most commonly, in unstructured programming this is a jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or explicitly setting it to true, as while (true) .... Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop), Fortran (DO ... END DO), Go (for { ... }), and Ruby (loop do ... end). Often, an infinite loop is unintentionally created by a programming error in a condition-controlled loop, wherein the loop condition uses variables that never change within the loop. === Continuation with next iteration === Sometimes within the body of a loop there is a desire to skip the remainder of the loop body and continue with the next iteration of the loop. Some languages provide a statement such as continue (most languages), skip, cycle (Fortran), or next (Perl and Ruby), which will do this. The effect is to prematurely terminate the innermost loop body and then resume as normal with the next iteration. If the iteration is the last one in the loop, the effect is to terminate the entire loop early. === Redo current iteration === Some languages, like Perl and Ruby, have a redo statement that restarts the current iteration from the start. === Restart loop === Ruby has a retry statement that restarts the entire loop from the initial iteration. === Early exit from loops === When using a count-controlled loop to search through a table, it might be desirable to stop searching as soon as the required item is found.
Some programming languages provide a statement such as break (most languages), Exit (Visual Basic), or last (Perl), whose effect is to terminate the current loop immediately and transfer control to the statement immediately after that loop. Another term for early-exit loops is loop-and-a-half. The following example is done in Ada which supports both early exit from loops and loops with test in the middle. Both features are very similar and comparing both code snippets will show the difference: early exit must be combined with an if statement while a condition in the middle is a self-contained construct. Python supports conditional execution of code depending on whether a loop was exited early (with a break statement) or not by using an else-clause with the loop. For example, The else clause in the above example is linked to the for statement, and not the inner if statement. Both Python's for and while loops support such an else clause, which is executed only if early exit of the loop has not occurred. Some languages support breaking out of nested loops; in theoretical discussions, these are called multi-level breaks. A common use case is searching a multi-dimensional table. This can be done either via multilevel breaks (break out of N levels), as in bash and PHP, or via labeled breaks (break out and continue at given label), as in Go, Java and Perl. Alternatives to multilevel breaks include single breaks, together with a state variable which is tested to break out another level; exceptions, which are caught at the level being broken out to; placing the nested loops in a function and using return to effect termination of the entire nested loop; or using a label and a goto statement. C does not include a multilevel break, and the usual alternative is to use a goto to implement a labeled break. Python does not have a multilevel break or continue – this was proposed in PEP 3136, and rejected on the basis that the added complexity was not worth the rare legitimate use.
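Python's else-clause on loops and the wrap-in-a-function alternative to multi-level breaks, both described above, can be sketched together (find and first_match are illustrative names):

```python
def find(table, wanted):
    """Linear search using break and the loop's else clause."""
    for i, item in enumerate(table):
        if item == wanted:
            break            # early exit: the else clause is skipped
    else:                    # runs only when the loop was NOT broken
        return None
    return i

print(find(["a", "b", "c"], "b"))   # 1
print(find(["a", "b", "c"], "z"))   # None

# Exiting nested loops without a multi-level break: place them in a
# function and use return, terminating both loops at once.
def first_match(grid, wanted):
    for row in grid:
        for item in row:
            if item == wanted:
                return item
    return None

print(first_match([[1, 2], [3, 4]], 3))   # 3
```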
The notion of multi-level breaks is of some interest in theoretical computer science, because it gives rise to what is today called the Kosaraju hierarchy. In 1973 S. Rao Kosaraju refined the structured program theorem by proving that it is possible to avoid adding additional variables in structured programming, as long as arbitrary-depth, multi-level breaks from loops are allowed. Furthermore, Kosaraju proved that a strict hierarchy of programs exists: for every integer n, there exists a program containing a multi-level break of depth n that cannot be rewritten as a program with multi-level breaks of depth less than n without introducing added variables. One can also return out of a subroutine executing the looped statements, breaking out of both the nested loop and the subroutine. There are other proposed control structures for multiple breaks, but these are generally implemented as exceptions instead. In his 2004 textbook, David Watt uses Tennent's notion of sequencer to explain the similarity between multi-level breaks and return statements. Watt notes that a class of sequencers known as escape sequencers, defined as "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. As commonly implemented, however, return sequencers may also carry a (return) value, whereas the break sequencer as implemented in contemporary languages usually cannot. === Loop variants and invariants === Loop variants and loop invariants are used to express correctness of loops. In practical terms, a loop variant is an integer expression which has an initial non-negative value. The variant's value must decrease during each loop iteration but must never become negative during the correct execution of the loop. Loop variants are used to guarantee that loops will terminate. 
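A loop variant can even be checked mechanically. In the binary search below (an illustrative example, not drawn from the text), hi - lo is a non-negative integer that strictly decreases on every iteration, which is what guarantees termination:

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or None."""
    lo, hi = 0, len(a)
    while lo < hi:
        variant = hi - lo              # the loop variant: a non-negative int
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
        assert 0 <= hi - lo < variant  # the variant strictly decreases
    if lo < len(a) and a[lo] == target:
        return lo
    return None

print(binary_search([1, 3, 5, 7], 5))   # 2
print(binary_search([1, 3, 5, 7], 4))   # None
```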
A loop invariant is an assertion which must be true before the first loop iteration and remain true after each iteration. This implies that when a loop terminates correctly, both the exit condition and the loop invariant are satisfied. Loop invariants are used to monitor specific properties of a loop during successive iterations. Some programming languages, such as Eiffel, contain native support for loop variants and invariants. In other cases, support is an add-on, such as the Java Modeling Language's specification for loop statements in Java. === Loop sublanguage === Some Lisp dialects provide an extensive sublanguage for describing loops. An early example can be found in Conversational Lisp of Interlisp. Common Lisp provides a Loop macro which implements such a sublanguage. === Loop system cross-reference table === The feature table is not reproduced here; its footnotes follow:
while (true) does not count as an infinite loop for this purpose, because it is not a dedicated language structure.
C's for (init; test; increment) loop is a general loop construct, not specifically a counting one, although it is often used for that.
Deep breaks may be accomplished in APL, C, C++ and C# through the use of labels and gotos.
Iteration over objects was added in PHP 5.
A counting loop can be simulated by iterating over an incrementing list or generator, for instance, Python's range().
Deep breaks may be accomplished through the use of exception handling.
There is no special construct, since the while function can be used for this.
There is no special construct, but users can define general loop functions.
The C++11 standard introduced the range-based for. In the STL, there is a std::for_each template function which can iterate on STL containers and call a unary function for each element. The functionality also can be constructed as macro on these containers.
Count-controlled looping is effected by iteration across an integer interval; early exit by including an additional condition for exit.
Eiffel supports a reserved word retry; however, it is used in exception handling, not loop control.
Requires the Java Modeling Language (JML) behavioral interface specification language.
Requires loop variants to be integers; transfinite variants are not supported. [1]
D supports infinite collections, and the ability to iterate over those collections. This does not require any special construct.
Deep breaks can be achieved using GO TO and procedures.
Common Lisp predates the concept of a generic collection type.

== Structured non-local control flow ==

Many programming languages, especially those favoring more dynamic styles of programming, offer constructs for non-local control flow. These cause the flow of execution to jump out of a given context and resume at some predeclared point. Conditions, exceptions and continuations are three common sorts of non-local control constructs; more exotic ones also exist, such as generators, coroutines and the async keyword.

=== Conditions ===

The earliest Fortran compilers had statements for testing exceptional conditions. These included the IF ACCUMULATOR OVERFLOW, IF QUOTIENT OVERFLOW, and IF DIVIDE CHECK statements. In the interest of machine independence, they were not included in FORTRAN IV and the Fortran 66 Standard. However, since Fortran 2003 it is possible to test for numerical issues via calls to functions in the IEEE_EXCEPTIONS module. PL/I has some 22 standard conditions (e.g., ZERODIVIDE, SUBSCRIPTRANGE, ENDFILE) which can be raised and which can be intercepted by: ON condition action; Programmers can also define and use their own named conditions. Like the unstructured if, only one statement can be specified, so in many cases a GOTO is needed to decide where flow of control should resume. Unfortunately, some implementations had a substantial overhead in both space and time (especially SUBSCRIPTRANGE), so many programmers tried to avoid using conditions.
A common syntax example: ON condition GOTO label

=== Exceptions ===

Modern languages have a specialized structured construct for exception handling which does not rely on the use of GOTO or (multi-level) breaks or returns. For example, in C++ one can enclose code in a try block followed by one or more catch clauses; a throw statement transfers control to the nearest matching handler. Any number and variety of catch clauses can be used. If there is no catch matching a particular throw, control percolates back through subroutine calls and/or nested blocks until a matching catch is found or until the end of the main program is reached, at which point the program is forcibly stopped with a suitable error message. Via C++'s influence, catch is the keyword reserved for declaring a pattern-matching exception handler in other languages popular today, like Java or C#. Some other languages like Ada use the keyword exception to introduce an exception handler and then may even employ a different keyword (when in Ada) for the pattern matching. A few languages like AppleScript incorporate placeholders in the exception handler syntax to automatically extract several pieces of information when the exception occurs, as in AppleScript's on error construct.

David Watt's 2004 textbook also analyzes exception handling in the framework of sequencers (introduced in this article in the section on early exits from loops). Watt notes that an abnormal situation, generally exemplified with arithmetic overflows or input/output failures like file not found, is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit". For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program, and thus a handling routine for this abnormal situation cannot be located in low-level system code.
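The C++ try/throw/catch construct described above can be sketched as follows. This is a minimal illustration, not taken from any particular source; the function name and message text are invented for the example.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical helper illustrating C++ structured exception handling:
// a throw inside the try block transfers control to the matching catch.
std::string check_age(int age) {
    try {
        if (age < 0)
            throw std::invalid_argument("age cannot be negative");
        return "ok";
    } catch (const std::invalid_argument& e) {
        // reached only when a matching exception is thrown
        return std::string("error: ") + e.what();
    } catch (...) {
        // a catch-all clause; any number and variety of handlers may follow a try
        return "unknown error";
    }
}
```

If no handler anywhere in the call chain matches, the runtime terminates the program, as described above.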
Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and that "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" Watt notes that in contrast to status-flag testing, exceptions have the opposite default behavior, causing the program to terminate unless the program deals with the exception explicitly in some way, possibly by adding explicit code to ignore it. Based on these arguments, Watt concludes that jump sequencers or escape sequencers are less suitable as a dedicated exception sequencer with the semantics discussed above.

In Object Pascal, D, Java, C#, and Python a finally clause can be added to the try construct. No matter how control leaves the try, the code inside the finally clause is guaranteed to execute. This is useful when writing code that must relinquish an expensive resource (such as an opened file or a database connection) when finished processing. Since this pattern is fairly common, C# has a special using syntax: upon leaving the using block, the compiler guarantees that the stm object is released, effectively binding the variable to the file stream while abstracting from the side effects of initializing and releasing the file. Python's with statement and Ruby's block argument to File.open are used to similar effect.

All the languages mentioned above define standard exceptions and the circumstances under which they are thrown. Users can throw exceptions of their own; C++ allows users to throw and catch almost any type, including basic types like int, whereas other languages like Java are less permissive.

=== Continuations ===

=== Async ===

C# 5.0 introduced the async keyword for supporting asynchronous I/O in a "direct style".
=== Generators ===

Generators, also known as semicoroutines, allow control to be yielded to a consumer method temporarily, typically using a yield keyword. Like the async keyword, this supports programming in a "direct style".

=== Coroutines ===

Coroutines are functions that can yield control to each other, a form of co-operative multitasking without threads. Coroutines can be implemented as a library if the programming language provides either continuations or generators, so the distinction between coroutines and generators is in practice a technical detail.

=== Non-local control flow cross reference ===

== Proposed control structures ==

In a spoof Datamation article in 1973, R. Lawrence Clark suggested that the GOTO statement could be replaced by the COMEFROM statement, and provided some entertaining examples. COMEFROM was implemented in the esoteric programming language INTERCAL. Donald Knuth's 1974 article "Structured Programming with go to Statements" identified two situations which were not covered by the control structures listed above, and gave examples of control structures which could handle them. Despite their utility, these constructs have not yet found their way into mainstream programming languages.

=== Loop with test in the middle ===

The following was proposed by Dahl in 1972, shown here in its general form alongside a concrete example that copies characters:

loop                        loop
    xxx1                        read(char);
while test;                 while not atEndOfFile;
    xxx2                        write(char);
repeat;                     repeat;

If xxx1 is omitted, we get a loop with the test at the top (a traditional while loop). If xxx2 is omitted, we get a loop with the test at the bottom, equivalent to a do while loop in many languages. If while is omitted, we get an infinite loop. The construction here can be thought of as a do loop with the while check in the middle. Hence this single construction can replace several constructions in most programming languages.
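Dahl's character-copying example can be sketched in C++ using the common infinite-loop-with-break emulation. The function name copy_chars is illustrative, and a string stream stands in for file I/O:

```cpp
#include <sstream>
#include <string>

// Loop with the test in the middle: read (xxx1), test, then write (xxx2).
std::string copy_chars(const std::string& input) {
    std::istringstream in(input);   // stands in for the input file
    std::string out;                // stands in for the output file
    while (true) {
        char c;
        in.get(c);                  // xxx1: read(char)
        if (in.eof()) break;        // the "while not atEndOfFile" test, mid-loop
        out.push_back(c);           // xxx2: write(char)
    }
    return out;
}
```

Note that the read must be attempted before the end-of-file test can be made, which is exactly why the test belongs in the middle rather than at the top or bottom of the loop.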
Languages lacking this construct generally emulate it using an equivalent infinite-loop-with-break idiom:

while (true) {
    xxx1
    if (not test) break;
    xxx2
}

A possible variant is to allow more than one while test; within the loop, but the use of exitwhen (see next section) appears to cover this case better. In Ada, the above loop construct (loop-while-repeat) can be represented using a standard infinite loop (loop - end loop) that has an exit when clause in the middle (not to be confused with the exitwhen statement in the following section). Naming a loop (like Read_Data in this example) is optional but permits leaving the outer loop of several nested loops.

=== Multiple early exit/exit from nested loops ===

This construct was proposed by Zahn in 1974. A modified version is presented here.

exitwhen EventA or EventB or EventC;
    xxx
exits
    EventA: actionA
    EventB: actionB
    EventC: actionC
endexit;

exitwhen is used to specify the events which may occur within xxx; their occurrence is indicated by using the name of the event as a statement. When some event does occur, the relevant action is carried out, and then control passes just after endexit. This construction provides a very clear separation between determining that some situation applies and the action to be taken for that situation. exitwhen is conceptually similar to exception handling, and exceptions or similar constructs are used for this purpose in many languages.

The following simple example involves searching a two-dimensional table for a particular item.

exitwhen found or missing;
    for I := 1 to N do
        for J := 1 to M do
            if table[I,J] = target then found;
    missing;
exits
    found: print ("item is in table");
    missing: print ("item is not in table");
endexit;

== Security ==

One way to attack a piece of software is to redirect the flow of execution of a program.
A variety of control-flow integrity techniques, including stack canaries, buffer overflow protection, shadow stacks, and vtable pointer verification, are used to defend against these attacks.

== See also ==

== Notes ==

== References ==

== Further reading ==

Hoare, C. A. R. "Partition: Algorithm 63," "Quicksort: Algorithm 64," and "Find: Algorithm 65." Comm. ACM 4, 321–322, 1961.

== External links ==

Media related to Control flow at Wikimedia Commons
Go To Statement Considered Harmful
A Linguistic Contribution of GOTO-less Programming
"Structured Programming with Go To Statements" (PDF). Archived from the original (PDF) on 2009-08-24.
"IBM 704 Manual" (PDF).
https://en.wikipedia.org/wiki/Control_flow