OpenLR is a royalty-free open standard for "procedures and formats for the encoding, transmission, and decoding of local data irrespective of the map" developed by TomTom. The format allows locations localised on one map to be found on another map to which the data have been transferred. [1] OpenLR requires that coordinates are specified in the WGS 84 format and that route links are given in metres. Also, all routes need to be assigned to a "functional road class". The specification is described in a white paper licensed under a Creative Commons license. Additionally, TomTom has published an open-source library for the format under the Apache license. [2] The Traveller Information Services Association (TISA) adopted OpenLR for the TPEG 2 standard, albeit with some modifications to align it with the conventions and principles of TPEG2. While functionally equivalent to the TomTom specification, the TISA adaptation differs in the XML structure and some field names (e.g. by use of the term properties where TomTom uses attributes); the binary format is also different. TISA's version of the specification was subsequently adopted as ISO 21219-22:2017. [3] OpenLR is one of multiple location referencing methods supported by Datex2. While functionally equivalent, the Datex2 adaptation of the format is not interoperable with either the TomTom or the TISA/ISO specification at the XML level.
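The binary form of such a reference stores each WGS 84 coordinate as a 24-bit signed fixed-point integer. A minimal Python sketch of this round trip, following the encoding formula given in the OpenLR white paper (the variable names are illustrative):

```python
def encode_coord(deg: float, bits: int = 24) -> int:
    """Encode a WGS 84 degree value as a signed fixed-point integer
    (the 24-bit absolute coordinate form of OpenLR's binary format)."""
    sgn = (deg > 0) - (deg < 0)
    return int(sgn * 0.5 + deg * (1 << bits) / 360)

def decode_coord(val: int, bits: int = 24) -> float:
    """Invert encode_coord; 24 bits give roughly 2.4 m resolution at the equator."""
    sgn = (val > 0) - (val < 0)
    return (val - sgn * 0.5) * 360 / (1 << bits)

lon, lat = 6.12699, 49.60728          # an arbitrary WGS 84 point
r_lon = decode_coord(encode_coord(lon))
r_lat = decode_coord(encode_coord(lat))
```

The half-unit offset makes the rounding symmetric around zero, so positive and negative coordinates lose the same amount of precision.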
https://en.wikipedia.org/wiki/OpenLR
OpenMS is an open-source project for data analysis and processing in mass spectrometry and is released under the 3-clause BSD licence . It supports most common operating systems including Microsoft Windows , MacOS and Linux . [ 2 ] [ 3 ] OpenMS has tools for analysis of proteomics data, providing algorithms for signal processing, feature finding (including de-isotoping), visualization in 1D (spectra or chromatogram level), 2D and 3D, map mapping and peptide identification. It supports label-free and isotopic-label based quantification (such as iTRAQ and TMT and SILAC ). OpenMS also supports metabolomics workflows and targeted analysis of DIA/SWATH data. [ 2 ] Furthermore, OpenMS provides tools for the analysis of cross linking data, including protein-protein, protein-RNA and protein-DNA cross linking. Lastly, OpenMS provides tools for analysis of RNA mass spectrometry data. OpenMS was originally released in 2007 in version 1.0 and was described in two articles published in Bioinformatics in 2007 and 2008 and has since seen continuous releases. [ 4 ] [ 5 ] In 2009, the visualization tool TOPPView was published [ 6 ] and in 2012, the workflow manager and editor TOPPAS was described. [ 7 ] In 2013, a complete high-throughput label-free analysis pipeline using OpenMS 1.8 was described and compared with similar, proprietary software (such as MaxQuant and Progenesis QI ). The authors conclude that "[...] all three software solutions produce adequate and largely comparable quantification results; all have some weaknesses, and none can outperform the other two in every aspect that we examined. However, the performance of OpenMS is on par with that of its two tested competitors [...]". [ 8 ] The OpenMS 1.10 release contained several new analysis tools, including OpenSWATH (a tool for targeted DIA data analysis ), a metabolomics feature finder and a TMT analysis tool. Furthermore, full support for TraML 1.0.0 and the search engine MyriMatch were added. 
[9] The OpenMS 1.11 release was the first to contain fully integrated bindings to the Python programming language (termed pyOpenMS). [10] In addition, new tools were added to support QcML (for quality control) and metabolomics accurate mass analysis. Multiple tools were significantly improved with regard to memory and CPU performance. [11] With OpenMS 2.0, released in April 2015, the project provided a new version that had been completely cleared of GPL code and uses git (in combination with GitHub) for its version control and ticketing system. Other changes include support for mzIdentML, mzQuantML and mzTab, while improvements in the kernel allow for faster access to data stored in mzML and provide a novel API for accessing mass spectrometric data. [12] In 2016, the new features of OpenMS 2.0 were described in an article in Nature Methods. [2] In 2024, OpenMS 3.0 [3] was released, providing support for a wide array of data analysis tasks in proteomics, metabolomics and MS-based transcriptomics. OpenMS is currently developed with contributions from the group of Knut Reinert [13] at the Free University of Berlin, the group of Oliver Kohlbacher [14] at the University of Tübingen and the group of Hannes Roest [15] at the University of Toronto. OpenMS provides a set of over 100 executable tools that can be chained together into pipelines for mass spectrometry data analysis (the TOPP tools). It also provides visualization tools for spectra and chromatograms (1D), mass spectrometric heat maps (2D, m/z vs RT) as well as a three-dimensional visualization of a mass spectrometry experiment. Finally, OpenMS provides a C++ library (with bindings to Python available since version 1.11) for LC/MS data management and analysis, which developers can use to create new tools and implement their own algorithms. OpenMS is free software available under the 3-clause BSD licence (previously under the LGPL).
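The de-isotoping and quantification features mentioned above rest on simple mass arithmetic: a protonated ion's m/z follows from its neutral mass and charge, and the charge state can be inferred from the spacing of an isotope envelope. A self-contained sketch of these relations (plain Python, not the pyOpenMS API):

```python
PROTON = 1.007276  # mass of a proton in Da

def mz(neutral_mass: float, charge: int) -> float:
    """m/z of a positively charged ion, assuming protonation."""
    return (neutral_mass + charge * PROTON) / charge

def charge_from_isotope_spacing(delta_mz: float) -> int:
    """Infer the charge state from isotope peak spacing, as a
    de-isotoping step would: neighbouring isotopes differ by
    ~1.00335 Da in neutral mass, i.e. by 1.00335/z on the m/z axis."""
    NEUTRON_GAP = 1.00335
    return round(NEUTRON_GAP / delta_mz)
```

For example, a 1000 Da species observed doubly charged appears near m/z 501, and isotope peaks about 0.5 m/z apart indicate a 2+ ion.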
Among others, it provides algorithms for signal processing, feature finding (including de-isotoping), visualization, map mapping and peptide identification. It supports label-free and isotopic-label based quantification (such as iTRAQ, TMT and SILAC). The following graphical applications are part of an OpenMS release:
https://en.wikipedia.org/wiki/OpenMS
OpenMath is the name of a markup language for specifying the meaning of mathematical formulae. Among other things, it can be used to complement MathML, a standard which mainly focuses on the presentation of formulae, with information about their semantic meaning. OpenMath can be encoded in XML or in a binary format. OpenMath consists of the definition of "OpenMath Objects", an abstract datatype for describing the logical structure of a mathematical formula, and the definition of "OpenMath Content Dictionaries", collections of names for mathematical concepts. The names available from the latter type of collection are specifically intended for use in extending MathML, and conversely, a basic set of such Content Dictionaries has been designed to be compatible with the small set of mathematical concepts defined in Content MathML, the non-presentational subset of MathML. OpenMath has been developed in a long series of workshops and (mostly European) research projects that began in 1993 and continue to this day. The OpenMath 1.0 standard was released in February 2000 and revised as OpenMath 1.1 in October 2002. Two years later, the OpenMath 2.0 standard was released in June 2004. OpenMath 1 fixed the basic language architecture, while OpenMath 2 brought better XML integration, structure sharing and a liberalized notion of OpenMath Content Dictionaries. The OpenMath effort is governed by the OpenMath Society, based in Helsinki, Finland. The Society brings together tool builders, software suppliers, publishers and authors. Membership is by invitation of the Society's Executive Committee, which welcomes self-nominations from individuals who have worked on OpenMath-related issues in research or application. As of 2007, Michael Kohlhase is president of the OpenMath Society. He succeeded Arjeh M. Cohen, who was the first president.
The well-known quadratic formula would be marked up in OpenMath as an expression tree made up from functional elements like OMA for function application or OMV for variables. In such an expression tree, symbols, i.e. elements like <OMS cd="arith1" name="times"/>, stand for mathematical functions that are applied to sibling expressions in an OMA, which are interpreted as arguments. The OMS element is a generic extension element that means whatever is specified in the content dictionary referred to in the cd attribute (this document can be found at the URI specified in the innermost cdbase attribute dominating the respective OMS element). In the quadratic formula, all symbols come from the content dictionary for arithmetic (arith1, see below), except for plusminus, which comes from a non-standard place, hence the cdbase attribute there. Content Dictionaries are structured XML documents that define mathematical symbols which can be referred to by OMS elements in OpenMath Objects. The OpenMath 2 standard does not prescribe a canonical encoding for content dictionaries, but only requires an infrastructure sufficient for unique referencing in OMS elements. OpenMath provides a very basic XML encoding that meets these requirements, and a set of specific content dictionaries for some areas of mathematics, in particular covering the K-14 fragment covered by Content MathML. For more richly structured content dictionaries (and generally for arbitrary mathematical documents), the OMDoc format extends OpenMath by a "statement level" (including structures like definitions, theorems, proofs and examples, as well as means for interrelating them) and a "theory level", where a theory is a collection of several contextually related statements. OMDoc's theories are designed to be compatible with OpenMath content dictionaries, but they can also be set into inheritance and import relations.
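The expression-tree encoding described above can be illustrated by assembling a small OpenMath object, here for x + y, with Python's standard XML tooling. The OMOBJ/OMA/OMS/OMV element names follow the text; arith1's plus symbol is part of the standard content dictionaries:

```python
import xml.etree.ElementTree as ET

def oms(cd: str, name: str) -> ET.Element:
    """A symbol reference into a content dictionary."""
    return ET.Element("OMS", cd=cd, name=name)

def omv(name: str) -> ET.Element:
    """A variable."""
    return ET.Element("OMV", name=name)

# OMA applies its first child (the symbol) to the remaining children.
oma = ET.Element("OMA")
oma.append(oms("arith1", "plus"))
oma.append(omv("x"))
oma.append(omv("y"))

obj = ET.Element("OMOBJ")
obj.append(oma)
xml_text = ET.tostring(obj, encoding="unicode")
```

The same pattern nests: the quadratic formula is an OMA whose head is a division symbol, with further OMAs (multiplication, root, plusminus) as its arguments.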
OpenMath is criticised for being inadequate for general mathematics, exposing not enough formal precision to capture the intricacies of numerics, lacking a proof-of-concept and as an inferior technology to already established approaches of encoding mathematical semantics, amongst other presumed shortcomings. [ 1 ]
https://en.wikipedia.org/wiki/OpenMath
OpenNMS is a free and open-source enterprise-grade network monitoring and network management platform. It is developed and supported by a community of users and developers and by the OpenNMS Group, which offers commercial services, training and support. The goal is for OpenNMS to be a truly distributed, scalable management application platform for all aspects of the FCAPS network management model while remaining 100% free and open source. Currently the focus is on fault and performance management. All code associated with the project is available under the Affero General Public License. The OpenNMS project is maintained by The Order of the Green Polo. The OpenNMS project was started in July 1999 by Steve Giles, Brian Weaver and Luke Rindfuss and their company PlatformWorks. [2] It was registered as project 4141 on SourceForge in March 2000. [3][4] On September 28, 2000, PlatformWorks was acquired by Atipa, a Kansas City-based competitor to VA Linux Systems. [5] In July 2001, Atipa changed its name to Oculan. [6] In September 2002, Oculan decided to stop supporting the OpenNMS project. Tarus Balog, then an Oculan employee, left the company to continue to focus on the project. [7] In September 2004, The OpenNMS Group was started by Balog, Matt Brozowski and David Hustace to provide a commercial services and support business around the project. Shortly after that, The Order of the Green Polo (OGP) was founded to manage the OpenNMS project itself. [8] While many members of the OGP are also employees of The OpenNMS Group, it remains a separate organization. OpenNMS is written in Java, and thus can run on any platform with support for a Java SDK version 11 or higher. [9] Precompiled binaries are available for most Linux distributions. In addition to Java, it requires the PostgreSQL database, although work is being done to make the application database-independent by leveraging the Hibernate project.
OpenNMS describes itself as a "network management application platform". [10] While useful when first installed, the software was designed to be highly customizable to work in a wide variety of network environments. There are four main functional areas of OpenNMS. OpenNMS is based around a "publish and subscribe" message bus. Processes within the software can publish events, and other processes can subscribe to them. In addition, OpenNMS can receive events in the form of SNMP traps, syslog messages, TL/1 events or custom messages sent as XML to port 5817. Events can be configured to generate alarms. [11] While events represent a history of information from the network, alarms can be used to create correlation workflows (resolving "down" alarms when matching "up" alarms are created) and to perform "event reduction" by representing multiple, identical events as a single alarm with a counter. Alarms can also generate events of their own, such as when an alarm is escalated in severity. Alarms clear from the system over time, unlike events, which persist as long as desired. The alarm subsystem can also integrate with a variety of trouble ticketing systems, such as Request Tracker, OTRS, Jira and Remedy. The software also contains an Event Translator, where incoming events can be augmented with additional data (such as the impact to customers) and turned into new events. [12] Events can generate notifications via e-mail, SMS, XMPP and custom notification methods. OpenNMS has been shown to be able to process 125,000 syslog messages per minute, continuously. [13] OpenNMS contains an advanced provisioning system for adding devices to the management system. This process can occur automatically by submitting a list or range of IP addresses to the system (both IPv4 and IPv6). Devices can also be expressly added to the system.
The underlying technology for this configuration is XML, so users can either use the web-based user interface or automate the process by scripting the creation of the XML configuration files. The provisioning system contains adapters to integrate with other processes within the application and with external software, such as a Dynamic DNS server and RANCID. The provisioning process is asynchronous for scalability, and has been shown to provision networks of more than 50,000 discrete devices, as well as single devices (Juniper E320) with over 200,000 virtual interfaces each. [14] The service assurance features of OpenNMS allow the availability of network-based services to be determined. The types of monitors span from the very simple (ICMP pings, TCP port checks) to the complex (Page Sequence Monitoring, [15] Mail Transport Monitor [16]). Outage information is stored in the database and can be used to generate availability reports. In addition to being able to monitor network services from the point of view of the OpenNMS server, remote pollers can be deployed to measure availability from distant locations. Papa John's Pizza uses the OpenNMS remote poller software in each of its nearly 3000 retail stores to measure the availability of centralized network resources. [17] Performance data collection exists in OpenNMS for a number of network protocols including SNMP, HTTP, JMX, WMI, XMP, XML, NSClient and JDBC. Data can be collected, stored, graphed and checked against thresholds. The process is highly scalable, and one instance of OpenNMS has been shown to collect 1.2 million data points via SNMP every five minutes. [18] OpenNMS is accessed via a web-based user interface built on Jetty. An integration with JasperReports creates high-level reports from the database and collected performance data.
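The scripted creation of provisioning XML mentioned above might look as follows. The element and attribute names in this sketch are simplified illustrations, not the exact OpenNMS requisition schema:

```python
import xml.etree.ElementTree as ET

def build_requisition(source: str, nodes: list[tuple[str, str]]) -> str:
    """Generate a hypothetical provisioning requisition document:
    one <node> per device, each with a single <interface>."""
    root = ET.Element("model-import", {"foreign-source": source})
    for label, ip in nodes:
        node = ET.SubElement(root, "node",
                             {"node-label": label, "foreign-id": label})
        ET.SubElement(node, "interface", {"ip-addr": ip})
    return ET.tostring(root, encoding="unicode")

xml_text = build_requisition(
    "branch-offices",
    [("router-1", "192.0.2.1"), ("switch-1", "192.0.2.2")],
)
```

A script like this could emit one requisition file per site, which is the kind of automation the XML-based configuration makes possible.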
https://en.wikipedia.org/wiki/OpenNMS
Open PHACTS (Open Pharmacological Concept Triple Store) was a European public–private partnership between academia, publishers, enterprises, pharmaceutical companies and other organisations working to enable better, cheaper and faster drug discovery. [1][2][3][4] It was funded by the Innovative Medicines Initiative, [5] selected as one of three projects to "design methods for common standards and sharing of data for more efficient drug development and patient treatment in the future". [6][7] A total of 27 partners were involved. The Open Pharmacological Space created by the consortium was intended to support open innovation and in-house non-public drug discovery research [8] by removing bottlenecks in drug development. [9][10] Resources from the project are publicly available on GitHub. [11] To reduce the barriers to drug discovery in industry, academia and small businesses, the Open PHACTS consortium built the Open PHACTS Discovery Platform. This platform was freely available, integrating pharmacological data from a variety of information resources and providing tools and services to query the integrated data in support of pharmacological research.
https://en.wikipedia.org/wiki/OpenPHACTS
OpenPicus was an Italian hardware company, launched in 2011, that designed and produced Internet of Things system on modules called Flyport. Flyport is open hardware, and the openPicus framework and IDE are open-source software. [1][2] Flyport is a stand-alone system on module: no external processor is needed to create IoT applications. The company ceased operations in 2018. OpenPicus was founded by Claudio Carnevali and Gabriele Allegria in 2011. The idea was to create an open hardware and software platform to speed up the development of professional IoT devices and services. [citation needed] By the end of 2018, the OpenPicus wiki and all related open hardware information had disappeared from the internet, as the founders of OpenPicus now promote the brand name IOmote, turning their expertise into a commercial business. Some old information (wiki, tutorials, etc.) for OpenPicus boards can be recovered via the Internet Archive Wayback Machine. [citation needed] Flyport is a smart, connected system on module for the Internet of Things. Flyport is powered by a lightweight open-source framework (based on FreeRTOS) that manages the TCP/IP software stack, the user application and the integrated web server. Flyport is available in 3 pin-compatible versions: [3] The Flyport system on module is based on a Microchip Technology PIC24 low-power processor. It is used to connect and control systems over the Internet through an embedded customizable web server or standard TCP/IP services. The integrated microcontroller runs the customer application, so no host processor is needed. The pinout is very flexible since it is customizable by software. Flyport can connect with several cloud servers such as Evrthng, Xively, ThingSpeak and many more. Hardware: schematics are released under CC BY 3.0. Software: the framework is released under LGPL 3.0.
https://en.wikipedia.org/wiki/OpenPicus
Open Quantum Assembly Language (OpenQASM; pronounced open kazm) [1] is a programming language designed for describing quantum circuits and algorithms for execution on quantum computers. It is designed to be an intermediate representation that can be used by higher-level compilers to communicate with quantum hardware, and allows for the description of a wide range of quantum operations, as well as classical feed-forward flow control based on measurement outcomes. The language includes a mechanism for describing explicit timing of instructions, and allows for the attachment of low-level definitions to gates for tasks such as calibration. [1] OpenQASM is not intended for general-purpose classical computation, and hardware implementations of the language may not support the full range of data manipulation described in the specification. Compilers for OpenQASM are expected to support a wide range of classical operations for compile-time constants, but support for these operations on runtime values may vary between implementations. [2] The language was first described in a paper published in July 2017, [1] and a reference source code implementation was released as part of IBM's Quantum Information Software Kit (Qiskit) for use with their IBM Quantum Experience cloud quantum computing platform. [3] The language has similar qualities to traditional hardware description languages such as Verilog. OpenQASM declares its version at the head of a source file as a number, as in the declaration "OPENQASM 2.0;". The original published implementations of OpenQASM used version 2.0. Version 3.0 of the specification is the current one and can be viewed at the OpenQASM repository on GitHub. [4] The official library includes an example OpenQASM program that adds two four-bit numbers. [5]
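The cited four-bit adder listing is not reproduced here, but as a minimal illustration of the syntax, an OpenQASM 2.0 program that prepares and measures a Bell state reads:

```qasm
OPENQASM 2.0;
include "qelib1.inc";   // standard gate library shipped with the spec

qreg q[2];              // two qubits
creg c[2];              // two classical bits for the results

h q[0];                 // put q[0] into superposition
cx q[0], q[1];          // entangle q[1] with q[0]
measure q -> c;         // measure both qubits into c
```

The version declaration on the first line is the mechanism described above; everything after it is a sequence of register declarations and gate applications, much as in a hardware description language.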
https://en.wikipedia.org/wiki/OpenQASM
openSAFETY is a communications protocol used to transmit information that is crucial for the safe operation of machinery in manufacturing lines, process plants, or similar industrial environments. Such information may be e.g. an alert signal triggered when someone or something has breached a light curtain on a factory floor. While traditional safety solutions rely on dedicated communication lines connecting machinery and control systems via special relays , openSAFETY does not need any extra cables reserved for safety-related information. It is a bus-based protocol that allows for passing on safety data over existing Industrial Ethernet connections between end devices and higher-level automation systems – connections principally established and used for regular monitoring and control purposes. Unlike other bus-based safety protocols that are suitable for use only with a single or a few specific Industrial Ethernet implementations and are incompatible with other systems, openSAFETY works with a wide range of different Industrial Ethernet variants. openSAFETY is certified according to IEC 61508 [ 1 ] and meets the requirements of SIL 3 applications. The protocol has been approved by national IEC committees representing over two dozen countries around the world, and has been released for international standardization in IEC 61784-3 FSCP 13 . [ 2 ] [ 3 ] openSAFETY supports functional features to enable fast data transfer such as direct communication between nodes on a network ( cross-traffic ) as well as a range of measures needed to ensure data integrity and accuracy, e.g. time stamps, unique data packet identifiers, and others. [ 4 ] One particularly notable characteristic is openSAFETY's encapsulation of safety data within an Ethernet frame: [ 5 ] two subframes, each being an identical duplicate of the other, are combined to form the full safety frame. 
Each of the subframes is secured by its own checksum , which in effect provides multiple safeguards and levels of redundancy to ensure any distortions of safety data or other types of faults cannot go unnoticed. [ 6 ] In contrast to all other bus-based safety solutions on the market, which were created to complement a specific Industrial Ethernet protocol or family of bus systems, openSAFETY was designed for general interoperability. Though openSAFETY was conceived by the Ethernet POWERLINK Standardization Group (EPSG) and originally developed as a safety companion to that organization’s own Industrial Ethernet variant, POWERLINK , the safety protocol is no longer bound to POWERLINK. Instead, it can be used with various major Industrial Ethernet implementations, namely PROFINET , SERCOS III , EtherNet/IP , Modbus-TCP , and POWERLINK. [ 7 ] This broad compatibility with about 90% of the installed base of Industrial Ethernet installations in 2010 [ 8 ] is achieved because openSAFETY operates only on the topmost (application) layer of the network, where safety data can be trafficked irrespective of specific network characteristics that may differ from one underlying bus system to another. This approach is commonly known as " black channel " operation in communication protocol engineering . [ 9 ] A relatively late arrival on the scene, [ 10 ] openSAFETY was first released in 2009. It is based on its immediate precursor technology, POWERLINK Safety, which was originally launched in 2007. openSAFETY won broad public attention in April 2010, when a presentation at the Hannover Messe trade show in Germany showcased four different implementations of the safety solution running in SERCOS III, Modbus TCP, EtherNet/IP and POWERLINK environments. [ 11 ] The public presentation and open-source release of the protocol was hotly debated, with strong reactions both in favor and against the new solution, which prompted extensive reporting in the trade press. 
[ 12 ] Following the major openSAFETY presentation in Hanover, proponents of the new solution gave lectures at other industry events as well, e.g. at TÜV Rheinland ’s 9th International Symposium in Cologne, Germany, on 4–5 May 2010. Speaking at this conference on Functional Safety in Industrial Applications , Stefan Schönegger of Austria’s Bernecker + Rainer Industrie-Elektronik Ges.m.b.H. ( B&R ), a co-creator and major advocate of openSAFETY, provided an introduction to key characteristics of the new protocol. [ 13 ] Reports on later gatherings indicate that the focus of presentations and discussions about the protocol soon shifted to specific implementation and applicability issues. [ 14 ] [ 15 ]
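The duplicated-subframe frame format described earlier can be sketched as follows. CRC-32 stands in for the protocol's actual checksums, purely for illustration:

```python
import zlib

def build_safety_frame(payload: bytes) -> bytes:
    """Build an openSAFETY-style frame: the payload is carried in two
    identical subframes, each protected by its own checksum."""
    sub = payload + zlib.crc32(payload).to_bytes(4, "big")
    return sub + sub  # two identical copies

def check_safety_frame(frame: bytes) -> bool:
    """A frame is valid only if both subframes match and both
    checksums verify, so a corrupted bit cannot go unnoticed."""
    half = len(frame) // 2
    sub1, sub2 = frame[:half], frame[half:]
    def crc_ok(sub: bytes) -> bool:
        return zlib.crc32(sub[:-4]) == int.from_bytes(sub[-4:], "big")
    return sub1 == sub2 and crc_ok(sub1) and crc_ok(sub2)

frame = build_safety_frame(b"\x01\x7f")
```

The redundancy is deliberate: a fault must defeat two independent checks and the equality comparison simultaneously to slip through.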
https://en.wikipedia.org/wiki/OpenSafety
OpenStructures is an open-source modular construction model based on a shared geometrical grid, called the OS grid. It was conceived by designer Thomas Lommée and first demonstrated at Z33, a house for contemporary art. [1][2] According to Lommée, the OpenStructures project explores the possibility of a modular system where "everyone designs for everyone". OpenStructures is developing a database where anyone can share designs, which are in turn available for download by the public. Each component design in the OS system will feature the previously designed OS parts that were used to create it. In addition, each part will feature the component designs that can be made from it. The OpenStructures model includes large and small scale manufacturers as well as craftsmen. They are invited to create their own designs according to the OS standard for sale on the market, which can in turn be fixed or disassembled at the end of their life and made into new products. [3] The OpenStructures grid is built around a square of 4 × 4 cm and is scalable. The squares can be further subdivided or put together to form larger squares, without losing inter-compatibility. Designers use the OS grid to determine dimensions, assembly points, and interconnecting diameters. This allows parts that were not originally from the same design to be used together in a new design. OpenStructures works at several scales, and analogies are made to biological systems, from smallest to biggest. [1] One of the research areas of OpenStructures is architecture. Architects of the Brussels Cooperation Collective [7] have worked on the subject. [8][9] Autarchitecture (West Flemish: autarkytecture, from Ancient Greek auto 'self' and architecture) is based on OpenStructures and proposes flexible constructions that can adapt over time. [10] Open smart brick elements and buildings can be based on OpenStructures.
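The grid arithmetic described above is simple: compatible part dimensions are multiples of the 4 cm base square. A hypothetical helper for snapping a dimension up to the grid (an illustration, not part of any official OpenStructures tool):

```python
import math

GRID_CM = 4.0  # the OS grid base square, in centimetres

def snap_to_grid(length_cm: float) -> float:
    """Round a part dimension up to the next multiple of the 4 cm
    OS grid, so that parts from different designs stay compatible."""
    return math.ceil(length_cm / GRID_CM) * GRID_CM
```

A 10 cm part would be sized up to 12 cm (three grid squares), while an 8 cm part already sits on the grid; larger assemblies remain multiples of the same unit, which is what keeps independently designed parts interchangeable.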
Fab lab Academy has built several beehives making use of the OpenStructures system to make them more sustainable. [11]
https://en.wikipedia.org/wiki/OpenStructures
SiteScope is agentless monitoring software focused on monitoring the availability and performance of distributed IT infrastructures, including servers, network devices and services, applications and application components, operating systems and various other IT enterprise components. [1] SiteScope was originally written by Freshwater Software in 1996, a company acquired by Mercury Interactive in 2001. [2] Mercury Interactive was subsequently acquired by Hewlett-Packard (HP) in 2006. [3] Version 10.10 was released in July 2009. [4] SiteScope is now marketed by OpenText after its acquisition of Micro Focus. SiteScope tests a web page or a series of web pages using synthetic monitoring. [5] However, it is not limited to web applications and can be used to monitor database servers (Oracle Database, Microsoft SQL Server, etc.), Unix servers, Microsoft Windows servers and many other types of hardware and software. It can export the collected data in real time to OpenText LoadRunner or it can be used in standalone mode. SiteScope supports more than 100 types of application [6] in physical and virtual environments and can monitor servers, databases, applications, networks, web transactions, streaming technology and integration technology, as well as generic elements including files, scripts and directories. [7] SiteScope monitoring supports mid-tier processes, URLs, utilization of servers and response time of the mid-tier processes. Users can set thresholds for specific characteristics and be alerted on critical or warning conditions. [8] HPE Software merged with Micro Focus in September 2017 and Micro Focus was later acquired by OpenText in January 2023. The latest release of SiteScope is version 11.92.
Prepackaged monitors include CPU Utilization Monitor, DNS Monitor, Directory Monitor, Disk Space Monitor, Log File Monitor, Memory Monitor, Network Monitor, Ping Monitor, Port Monitor, Script Monitor, Service Monitor, URL Monitor, URL List Monitor, URL Sequence Monitor, Web Server Monitor and WebLogic Application Server Monitor, along with configurable threshold values. [9] SiteScope comes with solution templates for monitoring IT infrastructure elements, including Oracle, Microsoft Exchange Server, SAP, WebLogic, and Unix and Linux operating systems. [10] Solution templates are for rapidly deploying specific monitoring based on best-practice methodologies. [11] Solution templates deploy a combination of standard SiteScope monitor types and solution-specific monitors with settings that are optimized for monitoring the availability, performance, and health of the target application or system. For example, the solutions for Microsoft Exchange monitoring include performance counter, event log, MAPI, and Exchange application-specific monitor types. [6] SiteScope 11.32 introduced the Manage View, an HTML5 component for monitoring IT infrastructure elements. The Manage View in the Unified Console provides self-service functionality (monitoring as a service) to non-admin users and reduces the amount of monitoring support required from the SiteScope administrator or monitoring team. The Manage UI also supports mobility, working on tablets and the most commonly used browsers.
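The threshold-based alerting described above amounts to classifying each measured value against a warning level and a critical level. An illustrative sketch (the names here are generic, not SiteScope configuration keys):

```python
def classify(value: float, warning: float, critical: float) -> str:
    """Map a measured value (e.g. CPU utilization in percent) to a
    monitor status, critical taking precedence over warning."""
    if value >= critical:
        return "critical"
    if value >= warning:
        return "warning"
    return "ok"

# Three sample CPU readings checked against 80% warning / 95% critical.
statuses = [classify(v, warning=80.0, critical=95.0)
            for v in (42.0, 85.5, 99.1)]
```

A monitoring loop would evaluate this on each poll and fire an alert whenever the status leaves "ok".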
https://en.wikipedia.org/wiki/OpenText_SiteScope
OpenTherm (OT) is a standard communications protocol used in central heating systems for the communication between central heating appliances and thermostatic controllers. [1] As a standard, OpenTherm is independent of any single manufacturer; a controller from one manufacturer can in principle be used to control a boiler from another. However, OpenTherm controllers and boilers do not always work properly together. The OpenTherm standard comprises a number of optional features, and some devices may include manufacturer-specific features. The presence or absence of such features may impair compatibility with other OpenTherm devices. OpenTherm was founded in 1996 because multiple manufacturers needed a simple-to-use communication system between room controller and boiler. It had to run, like the existing controllers, over the existing two wires, be insensitive to polarity, and work without batteries. For one British pound, Honeywell sold the first specification to the OpenTherm Association in November 1996. [citation needed] Shortly after, the first products appeared on the market. By 2008 the Association had grown to around 42 members, and it has regularly updated and improved the specification. Furthermore, the Association is also active in lobbying for the interests of its members and is present at exhibitions such as the ISH (Frankfurt) and the Mostra Convegno (Milan). As of 2016, the association has 53 members from around the world. [2] OpenTherm appliances are mainly used in Europe. [3] Communication is digital and bi-directional between the controller (primary) and the boiler (secondary). Various commands and kinds of information can be transferred; however, the most basic command is to set the boiler's target water temperature. OpenTherm makes use of a traditional untwisted 2-wire cable between controller and boiler. The protocol is not polarity sensitive: the wires can be swapped.
[ 4 ] The maximum wiring length is 50 m, with a maximum loop resistance of 2 x 5 ohm. For backward compatibility with traditional switching thermostatic controllers, OpenTherm specified that if the two wires are connected together then the boiler will switch on. Because the secondary supplies power over the two wires, the controller does not require its own power connection. [ 4 ] The primary sends out a 32-bit signal every second, to which the secondary sends an acknowledgement message. [ 4 ] Specification 3.0 also describes how more than two devices can be connected by OpenTherm. While OpenTherm is a point-to-point connection, an extra device (a gateway) can be added between the primary and the secondary. This gateway has one secondary and one (or more) primary interfaces. The gateway controls which data is passed to each secondary. An application example is a room temperature controller connected to a heat recovery unit, which is connected to a boiler. The heat recovery unit then functions as the gateway. In another possible configuration, a thermostat or room controller is connected to a sequencer with further OpenTherm interfaces connected to more than one boiler. The room controller can be a standard unit, since it only 'sees' one heat-producer. The sequencer includes additional software to increase or decrease the number of running boilers to match the actual heat demand. The sequencer also needs a sensor to measure the temperature of the combined output from the boilers and usually would also control a main circulation pump. What happens after a fault occurs (resequencing remaining units, passing fault messages through for display on the room controller, etc.) is also part of the sequencer functionality. (The hydraulic design of such a system must also take account of different combinations of boilers running at the same time: a Low Loss Header / Hydraulic Separator is usually included to combine the flows from the boilers.)
The two wires are used both to supply power to the controller and for bidirectional digital communication between the controller and the boiler. The minimum available power is 35 mW. When using OpenTherm Smart Power this can, by primary request, also be 136 mW (medium power) or 255 mW (high power). The controller transmits to the boiler by sending a Manchester-encoded sequence in the voltage domain. The boiler transmits data back to the controller in the current domain. OpenTherm specifies a maximum communications interval of one second. The data in the communication packet is functionally specified and is called the OpenTherm-ID (OT-ID). 256 OT-IDs are available: 128 are reserved for OEM use, and of the other 128, 90 are functionally specified (OT specification v3.0). When OT/- is used, the primary generates a PWM voltage signal representing the boiler water temperature set point. The boiler current signal indicates the status of the boiler: error or no error. Because of these limited possibilities, OT/- is rarely used. [ citation needed ] On June 16, 2008, OpenTherm specification 3.0 was approved by the association. This version introduces OpenTherm Smart Power. The primary can request the secondary to change the available power to low, medium or high power. With this, primary manufacturers can add more functionality to their products (such as a backlight or extra sensors). Manufacturers are allowed to market OpenTherm products when they comply with certain rules of the OpenTherm association. Most importantly, the manufacturer must be an OpenTherm member, and the product must be tested by an independent testing body. By handing over the test report and a Declaration of Conformity to the association, the manufacturer is allowed to use the OpenTherm logo.
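The 32-bit packet and the OT-ID scheme described above can be illustrated with a short sketch. Note that the exact bit layout used here (an even-parity bit, a 3-bit message type, 4 spare bits, an 8-bit data-id and a 16-bit data-value, with temperatures in f8.8 fixed-point format) follows common public descriptions of OpenTherm rather than the official specification text, and the OT-ID and message-type values are illustrative assumptions:

```python
# Illustrative sketch only: frame layout and constants follow common
# public descriptions of OpenTherm, not the official specification.

MSG_WRITE_DATA = 0b001  # assumed code for a primary -> secondary write

def f88(value: float) -> int:
    """Encode a number in the f8.8 fixed-point format (units of 1/256)."""
    return int(round(value * 256)) & 0xFFFF

def build_frame(msg_type: int, data_id: int, data_value: int) -> int:
    """Pack a 32-bit frame and set the top bit for even parity."""
    frame = (msg_type << 28) | (data_id << 16) | data_value
    if bin(frame).count("1") % 2:   # odd number of ones so far
        frame |= 1 << 31            # parity bit makes the count even
    return frame

# Example: write a control setpoint (here assumed to be OT-ID 1) of 60.0 °C.
frame = build_frame(MSG_WRITE_DATA, 1, f88(60.0))
print(hex(frame))  # 0x10013c00
```

The frame would then be Manchester-encoded on the wire, as the article describes, with the secondary answering each frame within the one-second communication interval.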
https://en.wikipedia.org/wiki/OpenTherm
OpenTofu is a software project for infrastructure as code that is managed by the Linux Foundation . [ 2 ] The last MPL-licensed version of Terraform was forked as OpenTofu in August 2023 after HashiCorp announced that all products produced by the company would be relicensed under the Business Source License (BUSL), with HashiCorp prohibiting commercial use of the community edition by those who offer "competitive services". In April 2024, HashiCorp sent a cease and desist notice to the OpenTofu project, stating that it had incorporated code from a BUSL-licensed version of Terraform without permission and "incorrectly re-labeled HashiCorp's code to make it appear as if it was made available by HashiCorp originally under a different license." OpenTofu denied the allegation, stating that the code cited had originated from an MPL-licensed version of Terraform. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/OpenTofu
OpenViBE is a software platform dedicated to designing, testing and using brain-computer interfaces . The package includes a Designer tool to create and run custom applications, along with several pre-configured and demo programs which are ready for use. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] OpenViBE is software for real-time neuroscience (that is, for real-time processing of brain signals ). It can be used to acquire, filter, process, classify and visualize brain signals in real time. The main OpenViBE application fields are medical (assistance to disabled people, real-time biofeedback , neurofeedback , real-time diagnosis ), multimedia ( virtual reality , video games ), robotics and all other application fields related to brain-computer interfaces and real-time neurosciences . OpenViBE users can either be programmers or people not familiar with programming. This includes medical doctors , video game developers , researchers in signal processing or robotics, etc. Since 2012, the start-up Mensia Technologies has developed an advanced version of the software called NeuroRT Suite. [ 7 ] The user interface of OpenViBE makes it easy to create BCI scenarios and to save them for later use, access and manipulation. OpenViBE, the first library of functions of this type written in C++, was developed by INRIA (Institut national de recherche en informatique et en automatique, France) and can be integrated and applied quickly and easily.
https://en.wikipedia.org/wiki/OpenVibe
OpenWebNet is a communications protocol developed by Bticino since 2000. The OpenWebNet protocol allows "high-level" interaction between a remote unit and the SCS bus of the MyHome domotic system. The latest protocol evolution has been improved to allow interaction with well-known home automation systems like KNX and DMX512-A, by using appropriate gateways . The OpenWebNet protocol is disclosed on the MyOpen community. The protocol is designed to be independent of the underlying technology. For example, it is possible to use supervisor software connected via Ethernet , via serial RS-232 or via USB to a gateway that is directly connected to a domotic system. Anyone can request a protocol message extension: it is enough to propose an RFC , which will be examined and disclosed if it respects the OpenWebNet syntax. An OpenWebNet message is structured with variable-length fields separated by the special character '*' and closed by '##'. The characters admitted in the fields are numbers and the character '#'. A message is therefore built from the following admitted fields: WHO, WHAT, WHERE, DIMENSION and VALUE.
WHO: characterizes the domotic system function to which the OpenWebNet message refers. For example, WHO = 1 characterizes the messages for lighting system management.
WHAT: characterizes an action to perform or a status to read. For every WHO (and therefore for every function) there is a specific WHAT table. The WHAT field can also contain optional parameters: WHAT#PAR1#PAR2… #PARn. Examples of actions: switch a light ON, dim to 75%, switch a shutter DOWN, radio ON, etc. Examples of status: light ON, active alarm, battery low, etc.
WHERE: characterizes the set of objects to which the OpenWebNet message refers. It can be a single object, a group of objects, a specific environment, the entire system, etc. For every WHO (and therefore for every function) there is a specific WHERE table. The WHERE tag can also contain optional parameters: WHERE#PAR1#PAR2… #PARn.
Examples of WHERE: all the lights of group 1, sensor 2 of zone 1 of the alarm system, etc.
DIMENSION: a range of values that characterizes a dimension of the object to which the message refers. For every WHO (and therefore for every function) there is a specific DIMENSION table. It is possible to request, read or write the value of a dimension. Every dimension has a fixed number of values, described in the VALUE field. Examples of dimensions: sensor temperature, loudspeaker volume, firmware version of a device, etc.
VALUE: characterizes the value of a dimension that is written, requested or read.
There are four types of OpenWebNet message: command/status messages, status request messages, dimension request/read/write messages, and acknowledgement messages (ACK and NACK). It is possible to interact with the SCS home automation bus by using a specific gateway . There are two types of gateway that allow a connection to the field bus using different standard protocols . The current implementation by BTicino is also an embedded web server . It works as a translator between OpenWebNet messages carried via TCP/IP and the SCS messages transmitted on the SCS bus. It is possible to control three different kinds of buses. The Ethernet gateway offers two modes of authentication. Usually, the default port for the Ethernet gateway is 20000, even though the registered port for the protocol is 20005. The USB or serial gateway is an interface that works as a translator between the OpenWebNet messages transmitted on USB or serial and the SCS messages transmitted on the SCS bus.
OpenWebNet message examples:
Command message (switch off light 77): WHO = 1, WHAT = 0, WHERE = 77.
Status message (scenario 1 of scenario unit 23 activated):
WHO = 0, WHAT = 1, WHERE = 23.
Status request message (status request of probe 1): WHO = 4, WHERE = 1.
Dimension request message (request of measured temperature, probe 44): WHO = 4, WHERE = 44, DIMENSION = 0.
Dimension read message (measured temperature, probe 44): WHO = 4, WHERE = 44, DIMENSION = 0, VALUE1 = 0251 (T = +25.1 °C), VALUE2 = 2 (system in "cooling mode").
Dimension write message (volume set at 50%, environment 2): WHO = #16, WHAT = #2, WHERE = #1, DIMENSION = 1, VALUE1 = 16
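The field structure and message types above can be sketched as simple string formatting. The frame shapes used here (a leading '#' marking request messages) follow the publicly documented OpenWebNet conventions; the WHO/WHAT/WHERE values are taken from the examples in the text:

```python
# Sketch of OpenWebNet message framing: fields separated by '*' and
# terminated by '##', as described above. The '#'-prefixed request
# forms follow publicly documented OpenWebNet conventions.

def command(who: int, what: int, where: int) -> str:
    """Command / status message: *WHO*WHAT*WHERE##"""
    return f"*{who}*{what}*{where}##"

def status_request(who: int, where: int) -> str:
    """Status request message: *#WHO*WHERE##"""
    return f"*#{who}*{where}##"

def dimension_request(who: int, where: int, dimension: int) -> str:
    """Dimension request message: *#WHO*WHERE*DIMENSION##"""
    return f"*#{who}*{where}*{dimension}##"

# Switch off light 77 (WHO = 1 lighting, WHAT = 0 off):
assert command(1, 0, 77) == "*1*0*77##"
# Status request for probe 1 of the temperature system (WHO = 4):
assert status_request(4, 1) == "*#4*1##"
# Request the measured temperature (DIMENSION = 0) of probe 44:
assert dimension_request(4, 44, 0) == "*#4*44*0##"
```

A supervisor would send such strings to the gateway (typically over TCP/IP on port 20000, as noted above) and parse the replies, which use the same field syntax.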
https://en.wikipedia.org/wiki/OpenWebNet
The Open Wireless Movement hosted at OpenWireless.org is an Internet activism project which seeks to increase Internet access by encouraging people and organizations to configure or install software on their own wireless router to offer a separate public guest network or to make a single public wireless access point . If many people did this, a ubiquitous global public wireless network would be created, achieving and surpassing the goal of increasing Internet access. The project was initiated in November 2012 by a coalition of ten advocacy groups including the Electronic Frontier Foundation (EFF), Fight for the Future , Free Press , Internet Archive , NYCwireless, Open Garden , OpenITP, the Open Spectrum Alliance, the Open Technology Institute , and the Personal Telco Project . [ 1 ] EFF representative Adi Kamdar commented, "We envision a world where sharing one's Internet connection is the norm. A world of open wireless would encourage privacy, promote innovation, and largely benefit the public good. And everyone—users, businesses, developers, and Internet service providers—can get involved." [ 1 ] As of September 2016, seventeen groups had joined the project, adding Engine , Mozilla , Noisebridge , the Open Rights Group , OpenMedia International, Sudo Room , and the Center for Media Justice . The project uses various strategies to encourage and assist people to make their Internet connections available for public use. It explains the benefits and drawbacks of the effects on society and on the owners of routers, answers questions regarding safety and legality, guides novice users in configuring their routers, and provides firmware for novices to install on their routers. [ citation needed ] The EFF created a router firmware called OpenWireless, a fork of CeroWRT, [ 2 ] which is in turn a branch of the OpenWrt firmware , [ 3 ] and which anyone may volunteer to install on their router to make it work for the OpenWireless.org project.
[ 4 ] [ 5 ] [ 6 ] [ 7 ] This firmware was first shared at the 2014 Hackers on Planet Earth conference. [ 6 ] Its developers set out to achieve simple installation on a wide range of hardware routers but struggled with the diversity of closed, proprietary devices. Development of the OpenWireless firmware ended in April 2015; parts of the work were merged into the Linux kernel and OpenWrt , and openwireless.org now redirects to eff.org . [ 5 ] [ 8 ] "In particular, once we obtained our first field data on router prevalence, we saw that none of the router models we expected to be able to support well have market shares above around 0.1%. Though we anticipated a fragmented market, that extreme degree of router diversity means that we would need to support dozens of different hardware platforms in order to be available to any significant number of users, and that does not seem to be an efficient path to pursue. Without a good path to direct deployment, EFF is deprioritizing our work on the freestanding router firmware project." [ 9 ]
https://en.wikipedia.org/wiki/OpenWireless.org
Open Babel is free chemical informatics software designed to facilitate the conversion of chemical file formats and manage molecular data. [ 3 ] It serves as a chemical expert system , widely used in fields such as cheminformatics , molecular modelling , and computational chemistry . Open Babel provides both a comprehensive library and command-line utilities, making it a versatile tool for researchers, developers, and professionals. [ 4 ] Because of this strong relationship to informatics, the program belongs more to the category of cheminformatics than to molecular modelling . It is available for Windows , Unix , Linux , macOS , and Android . It is free and open-source software released under a GNU General Public License (GPL) 2.0. The project's stated goal is: "Open Babel is a community-driven scientific project assisting both users and developers as a cross-platform program and library designed to support molecular modeling, chemistry, and many related areas, including interconversion of file formats and data." Open Babel and JOELib were derived from the OELib cheminformatics library. In turn, OELib was based on ideas in the original chemistry program Babel and an unreleased object-oriented programming library called OBabel . In cheminformatics, Open Babel facilitates the management of molecular data through substructure searching and molecular fingerprint calculations. These functionalities enable similarity analysis, dataset clustering, and efficient organization of chemical libraries, making it suitable for large-scale workflows. In drug discovery, Open Babel supports tasks such as preparing chemical libraries for high-throughput virtual screening and standardizing molecular formats for structure-based drug design. The software's ability to generate 3D molecular coordinates and calculate molecular descriptors is particularly valuable in predicting properties such as solubility, reactivity, and toxicity. [ 7 ]
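The fingerprint-based similarity analysis mentioned above is commonly scored with the Tanimoto coefficient over bit-vector fingerprints. The following pure-Python illustration shows the underlying calculation only; it does not use the Open Babel library itself, and the toy 8-bit "fingerprints" are invented for the example (real fingerprints such as Open Babel's FP2 are far longer):

```python
# Illustration of Tanimoto similarity over bit-vector fingerprints,
# the measure commonly used for the similarity analysis described
# above. Fingerprints are stored here as plain Python ints.

def tanimoto(fp_a: int, fp_b: int) -> float:
    """Bits set in both fingerprints / bits set in either fingerprint."""
    both = bin(fp_a & fp_b).count("1")
    either = bin(fp_a | fp_b).count("1")
    return both / either if either else 0.0

# Two toy 8-bit "fingerprints" sharing two of their set bits:
a = 0b10110000   # bits 7, 5, 4
b = 0b10100001   # bits 7, 5, 0
print(tanimoto(a, b))  # 0.5  (2 shared bits / 4 bits in the union)
```

Clustering a dataset then amounts to computing such pairwise scores and grouping molecules whose similarity exceeds a chosen threshold.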
https://en.wikipedia.org/wiki/Open_Babel
The Open Base Station Architecture Initiative ( OBSAI ) was a trade association created by Hyundai , LG Electronics , Nokia , Samsung and ZTE in September 2002 with the aim of creating an open market for cellular network base stations. The hope was that an open market would reduce the development effort and costs traditionally associated with creating base station products. The OBSAI specifications provided the architecture, function descriptions and minimum requirements for integration of a set of common modules into a base transceiver station (BTS). This was intended to provide the BTS integrator with flexibility. A version 2.0 system reference document was published in 2006. [ 1 ] The OBSAI Reference Architecture defines four functional blocks, interfaces between them, and requirements for external interfaces. A base transceiver station (BTS) has four main blocks or logical entities: Radio Frequency (RF) block, Baseband block, Control and Clock block, and Transport block. The Radio Frequency Block sends and receives signals to/from portable devices (via the air interface) and converts between digital data and antenna signal. Some of the main functions are D/A and A/D conversion, up/down conversion, carrier selection, linear power amplification, diversity transmit and receive, RF combining and RF filtering. The Baseband Block processes the baseband signal. The functions include encoding/decoding, ciphering/deciphering, frequency hopping (GSM), spreading and Rake receiver (WCDMA), MAC (WiMAX), protocol frame processing, MIMO etc. The Transport Block interfaces to the external network, and provides functions such as QoS , security functions and synchronization. Coordination between these three blocks is maintained by the Control and Clock Block . Internal interfaces between the functional blocks are called reference points (RP). RP1 is the interface that allows communication between the control block and the other three blocks.
It includes control and clock signals. The RP1 specification also defines UDPCP, a UDP-based reliable communication protocol. Version 2.1 of the reference point 1 interface was published in 2008. [ 2 ] RP2 provides a link between the transport and baseband blocks. Version 2.1 of the reference point 2 interface was published in 2008. [ 3 ] RP3 is the interface between the baseband block and the RF block. RP3-01 is an (alternate) interface between a Local Converter and a Remote RF block. Version 4.2 of the reference point 3 interface was published in 2010. [ 4 ] RP4 provides the DC power interface between the internal modules and DC power sources. Version 1.1 of the reference point 4 interface was published in 2010. [ 5 ] Much of the industry effort at the time was aimed at achieving lower-cost RF modules and power amplifiers (PAs), as these two components usually account for nearly 50 percent of the BTS cost. Consequently, OBSAI worked to define reference point 3 (RP3) prior to the other reference points to promote more competitive sourcing in the RF module and PA market. The Transport Block provides the external network interface to the operator network. Examples are Iub to the Radio Network Controller (RNC) for 3GPP systems, and R6 to the Access Services Network Gateway (centralized gateway) or R3 to the Connectivity Services Network (CSN) for WiMAX systems. The RF Block provides the external radio interface to subscriber devices. Examples are Uu or Um to the user equipment (UE) for 3GPP systems, or R1 for WiMAX. The Common Public Radio Interface (CPRI) is an alternative, competing standard.
https://en.wikipedia.org/wiki/Open_Base_Station_Architecture_Initiative
Open Bionics is a UK-based company that develops low-cost, 3D printed bionic arms, more formally known as myoelectric prostheses , for amputees with below-elbow amputations. Their bionic arms are fully functional, with lights, biofeedback vibrations, and different functions that allow the user to grab, pinch, high-five, fist bump, and thumbs-up. The company is based inside Future Space, co-located with Bristol Robotics Laboratory . [ 1 ] The company was founded in 2014 by Joel Gibbard MBE and Samantha Payne MBE . [ 2 ] In 2020 Joel Gibbard and Samantha Payne were awarded MBEs for their services to Innovation, Engineering, and Technology. Open Bionics grew out of the Open Hand project created by Joel Gibbard after studying robotics at the University of Plymouth . [ 3 ] The project aimed to use 3D printing to create hand prostheses. Samantha Payne had interviewed him as a reporter in Bristol covering social impact stories and was keen to have a social impact herself. They founded Open Bionics together in 2014. [ 2 ] In 2018 they were named the Hottest Startup Founders in Europe at the Europa Awards. [ 4 ] In late 2023, Open Bionics expanded its clinical presence in the United States, with clinics located in Denver, Los Angeles, Orlando, Austin, Chicago, and New York City. [ 5 ] The first product, the Hero Arm, was differentiated not only by its relatively low price given the functionality but also by making a bold positive feature of the artificial arm, rather than disguising it to look like a natural body part. [ 2 ] Each arm is 3D printed to the user's specific measurements, and muscle sensors control servo-actuated movement of the fingers. Key features include 6 different grip types, 180-degree wrist rotation, magnetically attached swappable decorative covers, adjustable fit to compensate for limb expansion (e.g. with temperature), and a ventilated liner.
[ 6 ] Users have access to a Sidekick App developed by Calvium with interactive training guides and personalization controls. [ 7 ] In 2025, Open Bionics launched new models of the Hero Arm, the Hero Pro and Hero RGD. These hands are wireless and waterproof, and work when detached from the wearer. [ 8 ] In 2015, Disney and Open Bionics announced a partnership to create superhero-themed prosthetics for young amputees. [ 9 ] In the same year, the company won the 2015 James Dyson Award in the UK for innovative engineering [ 10 ] [ 11 ] and Tech4Good's 2015 Accessibility Award. [ 12 ] [ 13 ] In 2016, it won a Bloomberg Business Innovators award. [ 14 ] [ 15 ] In January 2019, James Cameron and 20th Century Fox partnered with Open Bionics to give 13-year-old double amputee Tilly Lockey a pair of Alita -inspired bionic Hero Arms for the London premiere of Alita: Battle Angel . [ 16 ] Lockey lost both of her hands when she developed meningococcal sepsis at 15 months of age. [ 17 ] In 2020, Open Bionics partnered with gaming company Konami to create 'Venom Snake' Hero Arm covers, which are featured in the 2015 video game Metal Gear Solid V: The Phantom Pain . [ 18 ] In 2023, Open Bionics collaborated with Ukraine charity Superhumans Center to fit Ukrainian soldiers with bionic Hero Arms as a result of the ongoing Russian invasion of Ukraine. [ 19 ] In January 2019, Open Bionics raised Series A funding of $5.9 million. [ 20 ] [ 21 ] The round was led by Foresight Williams Technology EIS Fund, Ananda Impact Ventures and Downing Ventures, with participation from F1's Williams Advanced Engineering Group among others. [ 22 ] [ 23 ] This article about a company of the UK is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Bionics
Open Blueprint was an IBM framework developed in the early 1990s (and released in March 1992) that provided a standard for connecting network computers . [ 1 ] The open blueprint structure reduced redundancy by combining protocols. [ 2 ] [ 3 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Blueprint
Open Bug Bounty is a non-profit bug bounty platform established in 2014. The coordinated vulnerability disclosure platform allows independent security researchers to report XSS and similar security vulnerabilities on any website they discover using non-intrusive security testing techniques. [ 1 ] The researchers may choose to make the details of the vulnerabilities public in 90 days since vulnerability submission or to communicate them only to the website operators. The program's expectation is that the operators of the affected website will reward the researchers for making their reports. Unlike commercial bug bounty programs, Open Bug Bounty is a non-profit project and does not require payment by either the researchers or the website operators. Any bounty is a matter of agreement between the researchers and the website operators. Heise.de identified the potential for the website to be a vehicle for blackmailing website operators with the threat of disclosing vulnerabilities if no bounty is paid, but reported that Open Bug Bounty prohibits this. [ 2 ] Open Bug Bounty was launched by private security enthusiasts in 2014, and as of February 2017 had recorded 100,000 vulnerabilities, of which 35,000 had been fixed. [ 3 ] It grew out of the website XSSPosed, an archive of cross-site scripting vulnerabilities. [ 4 ] In February 2018, the platform had 100,000 fixed vulnerabilities using a coordinated disclosure program based on ISO 29147 guidelines. [ 5 ] Up to the end of 2019, the platform reported 272,020 fixed vulnerabilities using a coordinated disclosure program based on ISO 29147 guidelines. [ 6 ]
https://en.wikipedia.org/wiki/Open_Bug_Bounty
Open Catalog Interface ( OCI ) is an open standard for a software interface developed by SAP for punch-out catalogs that connect buyers' procurement systems with suppliers' eCommerce systems. [ 1 ] [ 2 ] OCI is an alternative to cXML . It is used by SAP Supplier Relationship Management , Microsoft Dynamics AX and other Enterprise resource planning and purchasing systems when connecting to external punch-out catalogs. In the open catalogue project Open Icecat , a separate OCI is defined for the exchange of multimedia data between multilingual product catalogs. The OCI format is used to define the field mapping between the supplier's catalog and the SAP SRM shopping cart , to ensure that the data is transferred accurately and completely between source and receiver. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Catalog_Interface
Open Cluster Framework ( OCF ) is a set of standards for computer clustering . The project started as a working group of the Free Standards Group , now part of the Linux Foundation . Original supporters included several computing companies and groups, including Compaq , Conectiva , IBM , Linux-HA , MSC Software , the Open Source Development Lab , OSCAR , Red Hat , SGI and SUSE . [ 1 ] OCF Resource agents are currently supported by Linux-HA Heartbeat, the high-availability cluster software. [ 2 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Cluster_Framework
The Open Computing Facility is a student organization at the University of California, Berkeley , and a chartered program of the ASUC . Founded in 1989, the OCF is an all-volunteer, student-run organization dedicated to providing free and accessible computing resources to all members of the University community. [ 1 ] The mission of the OCF is "to provide an environment where no member of Berkeley's campus community is denied the computer resources he or she seeks, to appeal to all members of the Berkeley campus community with unmet computing needs, and to provide a place for those interested in computing to fully explore that interest." [ 1 ] The OCF provides the following services, among others, to UC Berkeley students, staff, alumni, and affiliates: [ 2 ] To further the OCF's goal of promoting accessibility, the OCF publishes its board meeting minutes, [ 4 ] tech talks, [ 5 ] and Unix system administration DeCal materials [ 3 ] online for all to see and use. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Computing_Facility
The Open Data-Link Interface ( ODI ) is an application programming interface (API) for network interface controllers (NICs) developed by Apple and Novell . The API serves the same function as Microsoft and 3COM's Network Driver Interface Specification (NDIS). [ 1 ] Originally, ODI was written for NetWare and Macintosh environments. Like NDIS, ODI provides rules that establish a vendor-neutral interface between the protocol stack and the adapter driver. It resides in Layer 2, the Data Link layer, of the OSI model . This interface also enables one or more network drivers to support one or more protocol stacks . This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Data-Link_Interface
The Open Energy Modelling Initiative ( openmod ) is a grassroots community of energy system modellers from universities and research institutes across Europe and elsewhere. The initiative promotes the use of open-source software and open data in energy system modelling for research and policy advice. The Open Energy Modelling Initiative documents a variety of open-source energy models and addresses practical and conceptual issues regarding their development and application. The initiative runs an email list , an internet forum , and a wiki and hosts occasional academic workshops. A statement of aims is available. [ 1 ] The application of open-source development to energy modelling dates back to around 2003. This section provides some background for the growing interest in open methods. Just two active open energy modelling projects were cited in a 2011 paper: OSeMOSYS and TEMOA. [ 2 ] : 5861 Balmorel was also public at that time, having been made available on a website in 2001. [ b ] As of November 2016, [update] the openmod wiki lists 24 such undertakings. [ 3 ] As of October 2021, [update] the Open Energy Platform lists 17 open energy frameworks and about 50 open energy models. A 2012 paper presents the case for using "open, publicly accessible software and data as well as crowdsourcing techniques to develop robust energy analysis tools". [ 4 ] : 149 The paper claims that these techniques can produce high-quality results and are particularly relevant for developing countries. There is an increasing call for the energy models and datasets used for energy policy analysis and advice to be made public in the interests of transparency and quality. [ 5 ] A 2010 paper concerning energy efficiency modeling argues that "an open peer review process can greatly support model verification and validation, which are essential for model development".
[ 6 ] : 17 [ 7 ] One 2012 study argues that the source code and datasets used in such models should be placed under publicly accessible version control to enable third-parties to run and check specific models. [ 8 ] Another 2014 study argues that the public trust needed to underpin a rapid transition in energy systems can only be built through the use of transparent open-source energy models. [ 9 ] The UK TIMES project (UKTM) is open source, according to a 2014 presentation, because "energy modelling must be replicable and verifiable to be considered part of the scientific process" and because this fits with the "drive towards clarity and quality assurance in the provision of policy insights". [ 10 ] : 8 In 2016, the Deep Decarbonization Pathways Project (DDPP) is seeking to improve its modelling methodologies, a key motivation being "the intertwined goals of transparency, communicability and policy credibility." [ 11 ] : S27 A 2016 paper argues that model-based energy scenario studies, wishing to influence decision-makers in government and industry, must become more comprehensible and more transparent. To these ends, the paper provides a checklist of transparency criteria that should be completed by modelers. The authors note however that they "consider open source approaches to be an extreme case of transparency that does not automatically facilitate the comprehensibility of studies for policy advice." [ 12 ] : 4 An editorial from 2016 opines that closed energy models providing public policy support "are inconsistent with the open access movement [and] publically [ sic ] funded research". [ 13 ] : 2 A 2017 paper lists the benefits of open data and models and the reasons that many projects nonetheless remain closed. The paper makes a number of recommendations for projects wishing to transition to a more open approach. 
The authors also conclude that, in terms of openness, energy research has lagged behind other fields, most notably physics, biotechnology, and medicine. [ 14 ] Moreover: Given the importance of rapid global coordinated action on climate mitigation and the clear benefits of shared research efforts and transparently reproducible policy analysis, openness in energy research should not be for the sake of having some code or data available on a website, but as an initial step towards fundamentally better ways to both conduct our research and engage decision-makers with [our] models and the assumptions embedded within them. [ 14 ] : 214 A one-page opinion piece in Nature News from 2017 advances the case for using open energy data and modeling to build public trust in policy analysis. The article also argues that scientific journals have a responsibility to require that data and code be submitted alongside text for scrutiny; currently only Energy Economics makes this practice mandatory within the energy domain. [ 15 ] Issues surrounding copyright remain at the forefront with regard to open energy data. Most energy datasets are collated and published by official or semi-official sources, for example, national statistics offices , transmission system operators , and electricity market operators . The doctrine of open data requires that these datasets be available under free licenses (such as CC BY 4.0 ) or be in the public domain . But most published energy datasets carry proprietary licenses, limiting their reuse in numerical and statistical models, open or otherwise. Measures to enforce market transparency have not helped because the associated information is normally licensed to preclude downstream usage. Recent transparency measures include the 2013 European energy market transparency regulation 543/2013 [ 16 ] and a 2016 amendment to the German Energy Industry Act [ 17 ] to establish a national energy information platform, slated to launch on 1 July 2017.
Energy databases may also be protected under general database law , irrespective of the copyright status of the information they hold. [ 18 ] In December 2017, participants from the Open Energy Modelling Initiative and allied research communities made a written submission to the European Commission on the re-use of public sector information . [ 19 ] The document provides a comprehensive account of the data issues faced by researchers engaged in open energy system modeling and energy market analysis and quotes extensively from a German legal opinion. [ 20 ] In May 2020, participants from the Open Energy Modelling Initiative made a further submission on the European strategy for data. [ 21 ] [ 22 ] In mid-2021, participants made two written submissions on a proposed Data Act, legislative work-in-progress intended primarily to improve public interest business-to-government (B2G) information transfers within the European Economic Area (EEA). [ 23 ] [ 24 ] More specifically, the two Data Act submissions drew attention to restrictive but nonetheless compliant public disclosure reporting practices deployed by the European Energy Exchange (EEX). In May 2016, the European Union announced that "all scientific articles in Europe must be freely accessible as of 2020". [ 25 ] This was seen as a step in the right direction, but the new policy makes no mention of open software and its importance to the scientific process. [ 26 ] In August 2016, the United States government announced a new federal source code policy which mandates that at least 20% of custom source code developed by or for any agency of the federal government be released as open-source software (OSS). [ 27 ] The US Department of Energy (DOE) is participating in the program. The project is hosted on a dedicated website and subject to a three-year pilot. [ 27 ] [ 28 ] Open-source campaigners are using the initiative to advocate that European governments adopt similar practices.
[ 29 ] In 2017 the Free Software Foundation Europe (FSFE) issued a position paper calling for free software and open standards to be central to European science funding, including the flagship EU program Horizon 2020 . The position paper focuses on open data and open data processing; the question of open modeling is not traversed per se. [ 30 ] A trend evident by 2023 is the adoption of open energy models by regulators within the European Union and North America. Fairley (2023), writing in IEEE Spectrum , provides an overview. [ 31 ] As one example, the Canada Energy Regulator is using the PyPSA framework for systems analysis. [ 32 ] Participants in the Open Energy Modelling Initiative take turns hosting regular academic workshops, and the initiative also holds occasional specialist meetings.
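To illustrate the kind of optimisation that open frameworks such as PyPSA automate at national scale, the following is a minimal, self-contained sketch of merit-order dispatch in plain Python. All generator names, capacities, and costs here are hypothetical, and this is a toy illustration rather than PyPSA's actual API.

```python
# Illustrative merit-order dispatch: cheapest generators run first
# until demand is met. Fleet data below is entirely hypothetical.

def dispatch(generators, demand_mw):
    """Dispatch generators in merit order.

    generators: list of (name, capacity_mw, marginal_cost) tuples.
    Returns a dict mapping name -> dispatched MW.
    """
    schedule = {}
    remaining = demand_mw
    # Sort by marginal cost: the "merit order".
    for name, capacity, _cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity, remaining)
        schedule[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule

fleet = [
    ("wind",   300, 0.0),    # zero marginal cost, dispatched first
    ("ccgt",   500, 45.0),   # gas plant
    ("peaker", 200, 120.0),  # expensive peaking unit
]
print(dispatch(fleet, 650))  # wind fully used, gas covers the rest
```

Real frameworks solve a far richer version of this problem, with networks, storage, and investment decisions, but the merit-order core is the same.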
https://en.wikipedia.org/wiki/Open_Energy_Modelling_Initiative
The Open Graphics Project ( OGP ) was founded with the goal of designing an open-source hardware / open architecture and standard for graphics cards , primarily targeting free software / open-source operating systems. The project created a reprogrammable development and prototyping board and aimed to eventually produce a full-featured and competitive end-user graphics card. The project's first product was a PCI graphics card dubbed OGD1, which used a field-programmable gate array (FPGA) chip. Although the card did not have the same level of performance or functionality as graphics cards on the market at the time, it was intended to be useful as a tool for prototyping the project's first application-specific integrated circuit (ASIC) board, as well as for other professionals needing programmable graphics cards or FPGA-based prototyping boards. It was also hoped that this prototype would attract enough interest to generate some profit and attract investors for the next card, since starting production of a specialized ASIC design was expected to cost around US$2,000,000. PCI Express and/or Mini-PCI variations were planned to follow. The OGD1 began shipping in September 2010, [ 1 ] some six years after the project began and three years after the appearance of the first prototypes. [ 2 ] Full specifications and open-source device drivers were to be published. Source code for the device drivers and BIOS was to be released under the MIT and BSD licenses , while the RTL (in Verilog ) used for the FPGA and the ASIC was planned to be released under the GNU General Public License (GPL). The card has 256 MiB of DDR RAM, is passively cooled, and follows the DDC , EDID , DPMS and VBE VESA standards. TV-out was also planned.
Versioning schema for OGD1 takes the form: {Root Number} – {Video Memory}{Video Output Interfaces}{Special Options} (e.g. a special option of "A1" indicates that the OGA firmware is installed). [Figure: main components of the OGD1 graphics card. [ 3 ] ] The OGP project failed to gain the necessary funding to produce an ASIC version of its card. The project appears to have been discontinued in 2011.
https://en.wikipedia.org/wiki/Open_Graphics_Project
The Open Grid Services Infrastructure ( OGSI ) was published by the Global Grid Forum (GGF) as a proposed recommendation in June 2003. [ 1 ] It was intended to provide an infrastructure layer for the Open Grid Services Architecture (OGSA) . OGSI addresses statelessness issues (along with others) by essentially extending Web services to accommodate grid computing resources that are both transient and stateful. Web services groups later integrated their own approaches to capturing state into the Web Services Resource Framework (WSRF). With the release of GT4, the open-source toolkit migrated back to a pure Web services implementation (rather than OGSI) via integration of WSRF. [ 2 ] "OGSI, which was the former set of extensions to Web services to provide stateful interactions -- I would say at this point is obsolete," Jay Unger said. "That was the model that was used in the Globus Toolkit 3.0, but it's been replaced by WSRF, WS-Security, and the broader set of Web services standards. But OGSA, which focuses on specific service definition in the areas of execution components, execution modeling, grid data components, and information virtualization, still has an important role to play in the evolution of standards and open source tool kits like Globus." [ 2 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Grid_Services_Infrastructure
The Open Handset Alliance ( OHA ) is a consortium led by Google that develops Android . [ 1 ] Its member firms included HTC , Sony , Dell , Intel , Motorola , Qualcomm , Texas Instruments , Samsung Electronics , LG Electronics (formerly), T-Mobile , Nvidia , and Wind River Systems . [ 2 ] The OHA was established on November 5, 2007, with 34 members, [ 2 ] including mobile handset makers, application developers, some mobile network operators and chip makers. [ 3 ] As part of its efforts to promote a unified Android platform, OHA members are contractually forbidden from producing devices that are based on competing forks of Android. [ 4 ] [ 5 ] At the same time as the announcement of the formation of the Open Handset Alliance on November 5, 2007, the OHA also unveiled the Android Open Source Project , an open-source mobile phone platform based on the Linux kernel. [ 2 ] An early look at the Android SDK was released to developers on November 12, 2007. [ 6 ] The first commercially available phone running Android was the HTC Dream (also known as the T-Mobile G1). It was approved by the Federal Communications Commission (FCC) on August 18, 2008, [ 7 ] and became available on October 22 of that year. [ 8 ] The members of the Open Handset Alliance are:
https://en.wikipedia.org/wiki/Open_Handset_Alliance
The Open Hardware and Design Alliance ( OHANDA ) aims at encouraging the sharing of open hardware and designs. The core of the project is a free online service where manufacturers of open hardware and designs can register their products with a common label. This label maps the four freedoms of Free Software to physical devices and their documentation. It is similar to a non-registered trademark for hardware and can be compared to other certificates such as the U.S. Federal Communications Commission (FCC) or CE mark . OHANDA thus has the role of a self-organized registration authority. The Open Hardware and Design Alliance has rewritten the four freedoms of free software to match hardware and hardware documentation, respectively. The label exists to make open hardware and designs recognizable, because copyright and copyleft are hard to realize in the context of physical devices. Instead of going through the lengthy and expensive process of patenting hardware to make it open, hardware developers and designers can put their products under a public domain license by registering them on the OHANDA website. They can license their work under their own names and keep the devices' reuse open. The procedure is the following: A hardware designer or manufacturer creates an account on the OHANDA website to get a unique producer ID. This account can be either for a person or for an organization. The terms and conditions accepted in order to use the label imply that the producer grants the Four Freedoms to the users. The documentation of the product must be published under a "copyleft" or public domain license. Next, the manufacturer registers the product or design, and a unique product ID is issued. This ID is also referred to as the "OKEY". The manufacturer or designer can then print or engrave the OHANDA label and the OKEY onto the device. This way, the device always carries the link to the open documentation and to all past contributors.
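The registration flow described above can be sketched as a small program. This is purely illustrative: the real OHANDA service is a website, and the ID formats and field names below are hypothetical stand-ins, not OHANDA's actual schemes.

```python
# Sketch of the OHANDA-style registration flow (illustrative only;
# ID formats and field names are hypothetical).
import itertools

class Registry:
    def __init__(self):
        self._producer_seq = itertools.count(1)
        self._product_seq = itertools.count(1)
        self.products = {}

    def register_producer(self, name):
        # Step 1: the producer accepts the terms (granting the Four
        # Freedoms) and receives a unique producer ID.
        return f"P{next(self._producer_seq):04d}"

    def register_product(self, producer_id, docs_url):
        # Step 2: the product is registered with a link to its open
        # documentation; a unique product ID (the "OKEY") is issued.
        okey = f"OKEY-{next(self._product_seq):04d}"
        self.products[okey] = {"producer": producer_id, "docs": docs_url}
        return okey

registry = Registry()
pid = registry.register_producer("Example Hardware Co.")
okey = registry.register_product(pid, "https://example.org/docs")
# Step 3: the label and OKEY are engraved on the device, so anyone
# holding it can trace back to the documentation and contributors.
print(okey, registry.products[okey]["producer"])
```

The key design point, mirrored here, is that the OKEY travels with the physical device and always resolves back to the open documentation.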
Via the OHANDA website, users can trace back the artefact. At the same time, the label makes the openness of the product visible. Everyone is free to change the device and to share the new design with a new product ID on the website. The development can be followed through the associations online. [ 1 ] The idea of creating a label for open source hardware came up at the GOSH! Summit (Grounding Open Source Hardware) at the Banff Centre in Banff, Alberta in July 2009. [ 2 ] Since then, active community members have developed the project website [ 3 ] where OHANDA-labeled hardware can be registered. OHANDA launched a sticker campaign: the stickers show a crossed-out closed box, symbolizing closed "black boxes". The stickers are meant to be put on all sorts of devices to make visible how few open source devices exist. In 2011, OHANDA community members met at the Piksel11 festival in Bergen, Norway . [ 4 ] Since then, they have been using the term "reables" as a replacement for "Free/Libre Open Source Hardware". [ 5 ]
https://en.wikipedia.org/wiki/Open_Hardware_and_Design_Alliance
The Open Insulin Project is a community of researchers and advocates working to develop an open-source protocol for producing insulin that is affordable, has transparent pricing, and is community-owned. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The Open Insulin Project was started in 2015 by Anthony Di Franco, himself a type 1 diabetic . [ 5 ] He started the project in response to the high prices of insulin in the US. The project has been housed in Counter Culture Labs , a community laboratory and makerspace in the Bay Area. [ 6 ] Other collaborators include ReaGent , BioCurious and BioFoundry . The project aims to develop both the methodology and hardware to allow communities and individuals to produce medical-grade insulin for the treatment of diabetes. [ 7 ] These methods will be low-cost in order to combat the high price of insulin in places like the US. There is also potential for small-scale distributed production that may allow for improved insulin access in places with poor availability infrastructure. Access to insulin remains so insufficient around the globe that "half of all people who need insulin lack the financial or logistical means to obtain adequate supplies". [ 8 ] Researcher Frederick Banting famously [ 9 ] refused to put his name on the patent after the discovery of insulin; the original 1923 patent was sold by his collaborators for just $1 to the University of Toronto in an effort to make it as available as possible. [ 10 ] Despite this, for various reasons, [ 11 ] there remains no generic version of insulin available in the US. Insulin remains controlled by a small number of large pharmaceutical companies and sold at prices unaffordable to many who rely on it to live, particularly those without insurance. This lack of availability has led to fatalities, such as that of Alec Smith, who died in 2017 due to lack of insulin.
[ 12 ] The Open Insulin Project is motivated by the urgent need to protect the health of those with diabetes regardless of their economic or employment status by developing low-cost methods for insulin production available for anyone to use. The project has genetically engineered microorganisms to produce long-acting ( glargine ) and short-acting ( lispro ) insulin analogs using standard techniques in biotechnology, and according to their December 2018 release the "first major milestone ― the production of insulin at lab scale ― is almost complete". [ 13 ] The cost to produce insulin via Open Insulin methods is estimated by the project to be such that "roughly $10,000 should be enough to get a group started with the equipment needed to produce enough insulin for 10,000 people". [ 1 ] A more recent estimate (May 2020) by the Open Insulin Foundation puts the one-time equipment cost at $200,000 for used equipment (roughly $7–$20 per patient) and up to $1,000,000 for new equipment (about $73 per patient). The average price per vial was estimated to be $7, with each patient needing two vials per month. [ 14 ]
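A back-of-envelope check of the figures quoted above, using only the project's stated assumptions ($7 per vial, two vials per patient per month, and a $200,000 used-equipment estimate with a $7–$20 per-patient share):

```python
# Back-of-envelope arithmetic on the Open Insulin cost estimates.
# All inputs are the figures quoted in the article, not new data.
price_per_vial = 7       # USD, project estimate
vials_per_month = 2      # per patient
monthly_cost = price_per_vial * vials_per_month   # $14 per patient
annual_cost = monthly_cost * 12                   # $168 per patient

# The $200,000 used-equipment estimate with a $7-$20 per-patient
# share implies a community of very roughly this many patients:
equipment_cost = 200_000
patients_low = equipment_cost // 20   # at $20 per patient
patients_high = equipment_cost // 7   # at $7 per patient
print(monthly_cost, annual_cost, patients_low, patients_high)
```

So the stated per-patient equipment share corresponds to a community of roughly 10,000 to 28,000 patients, consistent in order of magnitude with the earlier "10,000 people" figure.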
https://en.wikipedia.org/wiki/Open_Insulin_Project
Open Media Format ( OMF ), Open Media Framework , or Open Media Framework Interchange ( OMFI ), is a platform-independent file format intended for transfer of digital media between different software applications . [ 1 ] The format aids the exchange of digital media across applications and platforms, enabling users to import media elements and to edit information and effects summaries. Sequential media representation is the primary concern addressed by the format. [ 2 ] [ 3 ] The primary application of OMFI is video production, though it provides a number of additional features [ 2 ] and benefits. [ 3 ] The OMFI format consists of four primary sections: Header, Object data, Object dictionary and Track data. The header contains an index of all the segments that constitute the file. [ 2 ] This computing article is a stub . You can help Wikipedia by expanding it .
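The four-section layout described above can be modelled conceptually as follows. This is a sketch only: the field names are hypothetical and this is not the real OMFI binary encoding, only an illustration of a header that indexes the other sections.

```python
# Conceptual model of the four-section OMFI file layout (Header,
# Object data, Object dictionary, Track data). Illustrative only;
# not the actual OMFI binary format.
from dataclasses import dataclass, field

@dataclass
class OMFFile:
    header: dict = field(default_factory=dict)       # index of segments
    object_data: list = field(default_factory=list)  # media objects
    object_dictionary: dict = field(default_factory=dict)
    track_data: list = field(default_factory=list)   # sequential tracks

    def index(self):
        # The header carries an index of all segments in the file.
        self.header["index"] = {
            "object_data": len(self.object_data),
            "object_dictionary": len(self.object_dictionary),
            "track_data": len(self.track_data),
        }
        return self.header["index"]

f = OMFFile()
f.object_data.append({"kind": "video", "frames": 120})
f.track_data.append({"track": 1, "refs": [0]})
print(f.index())
```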
https://en.wikipedia.org/wiki/Open_Media_Framework_Interchange
The Open Mobile Terminal Platform ( OMTP ) was a forum created by mobile network operators to discuss standards with manufacturers of mobile phones and other mobile devices. During its lifetime, the OMTP included manufacturers such as Huawei , LG Electronics , Motorola , Nokia , Samsung and Sony Ericsson . [ 1 ] OMTP was originally set up by leading mobile operators. At the time it transitioned into the Wholesale Applications Community at the end of June 2010, there were nine full members: AT&T , Deutsche Telekom AG , KT , Orange , Smart Communications , Telecom Italia , Telefónica , Telenor and Vodafone . OMTP also had the support of two sponsors, Ericsson and Nokia . OMTP recommendations helped to standardise mobile operator terminal requirements, reducing fragmentation and unnecessary optionality in operators' recommendations. OMTP's focus was on gathering and driving mobile terminal requirements and publishing its findings as Recommendations. OMTP was technology neutral, with its recommendations intended for deployment across the range of technology platforms, operating systems (OS) and middleware layers. OMTP is perhaps best known for its work in the field of mobile security, but its work encompassed the full range of mobile device capabilities. OMTP published recommendations in 2007 and early 2008 on areas such as Positioning Enablers, Advanced Device Management, IMS and Mobile VoIP . Later, the Advanced Trusted Environment: OMTP TR1 and its supporting document, 'Security Threats on Embedded Consumer Devices', [ 2 ] were released, with the endorsement of the UK Home Secretary, Jacqui Smith . [ 3 ] OMTP also published a requirements document addressing support for advanced SIM cards, defining advanced profiles for Smart Card Web Server, High Speed Protocol, Mobile TV and Contactless.
[ 4 ] OMTP also made significant progress in promoting micro-USB as a standard connector for data and power. [ 5 ] A full list of its recommendations can be found at GSMA.com. [ 6 ] In 2008, OMTP launched a new initiative called BONDI (named after the Australian beach ); the initiative defined new interfaces ( JavaScript APIs) and a security framework (based on XACML policy description) to enable secure access to mobile phone functionality (Application Invocation, Application Settings, Camera, Communications Log, Gallery, Location, Messaging, Persistent Data, Personal Information, Phone Status, User Interaction) from browsers and widget engines. The BONDI initiative also had an open source Reference Implementation at bondi.omtp.org . An Approved Release 1.0 of BONDI was issued in June 2009. An open source project for a comprehensive BONDI SDK was started at bondisdk.org. [ 7 ] In February 2009, OMTP expanded its Local Connectivity specification (based on micro-USB ) to describe requirements for a common charger and common connector, enabling different phones to share the same battery charger. The OMTP Common Charging and Local Data Connectivity [ 8 ] specification was adopted by the GSM Association in the Universal Charging System (UCS) initiative. This was further endorsed by the CTIA , [ 9 ] and the ITU . [ 10 ] In June 2009 the European Commission reached an agreement with several major mobile phone providers on requirements for a common External Power Supply (EPS) to be compatible with new data-enabled phones sold in the European Union . The EPS shares most of the key attributes of the UCS charger. [ 11 ] [ 12 ] In June 2010, the OMTP transitioned itself into the new Wholesale Applications Community. All OMTP activities ceased at that time and were either taken over within the WAC organisation or by other standards or industry associations.
[ 13 ] In turn, in July 2012 WAC itself was closed, with the OMTP standards being transferred to GSMA , and other assets and personnel transferring to Apigee . [ 14 ]
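The policy-based security model that BONDI applied to widget API access can be sketched in miniature. This is a hedged illustration of the XACML-style permit/deny idea only: the feature names, policy format, and default-deny rule below are hypothetical stand-ins, not BONDI's actual schema.

```python
# Sketch of an XACML-style access decision of the kind BONDI's
# security framework applied to widget API requests. Policy format
# and feature names are hypothetical.
POLICY = [
    # (widget origin, requested feature, decision)
    ("https://trusted.example", "camera",    "permit"),
    ("https://trusted.example", "location",  "permit"),
    ("*",                       "messaging", "deny"),
]

def decide(origin, feature):
    """Return the first matching rule's decision, defaulting to deny."""
    for rule_origin, rule_feature, decision in POLICY:
        if rule_origin in ("*", origin) and rule_feature == feature:
            return decision
    return "deny"  # no matching rule: deny by default

print(decide("https://trusted.example", "camera"))  # permit
print(decide("https://other.example", "camera"))    # deny (no rule)
```

The point of such a framework is that a widget never calls a device API directly; every request is first evaluated against a declarative policy.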
https://en.wikipedia.org/wiki/Open_Mobile_Terminal_Platform
The Open Mobile Video Coalition ( OMVC ) is a consortium founded to advance free broadcast mobile television in the United States . It was created by TV stations to promote the ATSC-M/H television standard to consumers, electronics manufacturers, the wireless industry, and the Federal Communications Commission . The OMVC set up the first real-life beta tests for ATSC-M/H on WATL and WPXA in Atlanta , and on KOMO and KONG in Seattle . It also advocated before the FCC against the reallocation of further upper-band UHF TV channels for wireless broadband. The OMVC commissioned a study to emphasize that broadcasting is a far more efficient use of bandwidth than unicasting the same live video stream hundreds of times over to every mobile phone that wants to watch local television. As of January 1, 2013, the OMVC became integrated into the National Association of Broadcasters . [ 1 ] This article about television in the United States is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_Mobile_Video_Coalition
Open Platform Communications ( OPC ) is a series of standards and specifications for industrial telecommunication . They are based on Object Linking and Embedding (OLE) for process control. An industrial automation task force developed the original standard in 1996 under the name OLE for Process Control . OPC specifies the communication of real-time plant data between control devices from different manufacturers. After the initial release in 1996, the OPC Foundation was created to maintain the standards. [ 1 ] Since OPC has been adopted beyond the field of process control, the OPC Foundation changed its name to Open Platform Communications in 2011. [ 1 ] The name change reflects the applications of OPC technology in building automation , discrete manufacturing , process control and others. OPC has also grown beyond its original OLE implementation to include other data transportation technologies including Microsoft Corporation 's .NET Framework , XML , and the OPC Foundation's binary-encoded TCP format. The OPC specification was based on the OLE , COM , and DCOM technologies developed by Microsoft Corporation for the Microsoft Windows operating system family. The specification defined a standard set of objects , interfaces (expressed in IDL ) and methods for use in process control and manufacturing automation applications to facilitate interoperability . The most common OPC specification is OPC Data Access , which is used for reading and writing real-time data. When vendors refer to "OPC" generically, they typically mean OPC Data Access (OPC DA). OPC DA itself has gone through three major revisions since its inception. Versions are backwards compatible, in that a version 3 OPC Server can still be accessed by a version 1 OPC Client, since the specifications add functionality but still require the older versions to be implemented as well.
However, a client could be written that does not support the older functions, since everything can be done using the newer ones; thus a DA-3-compatible client will not necessarily work with a DA 1.0 server. In addition to the OPC DA specification, the OPC Foundation maintains the OPC Historical Data Access (HDA) specification. In contrast to the real-time data accessible with OPC DA, OPC HDA allows access and retrieval of archived data. The OPC Alarms and Events specification is also maintained by the OPC Foundation, and defines the exchange of alarm and event type message information, as well as variable states and state management. [ 2 ] By 2002, the specification was compared to Fieldbus and other previous standards. [ 3 ] An OPC Express Interface, known as OPC Xi, was approved in November 2009 for the .NET Framework . [ 4 ] OPC Xi used Windows Communication Foundation instead of DCOM, so it can be configured for communication across the enhanced security of network address translation (NAT). [ 5 ] Around the same time, the OPC Unified Architecture (UA) was developed for platform independence. [ 5 ] UA can be implemented with Java , Microsoft .NET , or C , eliminating the need for the Microsoft Windows platform of earlier OPC versions. UA combined the functionality of the existing OPC interfaces with new technologies such as XML and Web services to deliver higher-level manufacturing execution system (MES) and enterprise resource planning (ERP) support. The first working group for UA met in 2003; version 1.0 was published in 2006. [ 6 ] On September 16, 2010, the OPC Foundation and the MTConnect Institute announced cooperation to ensure interoperability and consistency between the two standards. [ 7 ] OPC was designed to provide a common bridge between Windows-based software applications and process control hardware. Standards define consistent methods of accessing field data from plant floor devices.
This method remains the same regardless of the type and source of data. An OPC Server for one hardware device provides the same methods for an OPC client to access its data as any other OPC Server for any hardware device. The aim was to reduce the amount of duplicated effort required from hardware manufacturers and their software partners, and from the supervisory control and data acquisition (SCADA) and other human-machine interface (HMI) producers in order to interface the two. Once a hardware manufacturer had developed their OPC Server for the new hardware device, their work was done with regards to allowing any 'top end' to access their device, and once the SCADA producer had developed their OPC client, it allowed access to any hardware with an OPC compliant server. OPC servers provide a method for different software packages (as long as it is an OPC client) to access data from a process control device, such as a programmable logic controller (PLC) or distributed control system (DCS). Traditionally, any time a package needed access to data from a device, a custom interface or driver had to be written. There is nothing in the OPC specifications to restrict the server to providing access to a process control device. OPC Servers can be written for anything from getting the internal temperature of a microprocessor to the current temperature in Monument Valley. [ citation needed ] Once an OPC Server is written for a particular device, it can be reused by any application that is able to act as an OPC client. OPC servers can be linked and communicate to other servers. OPC servers use Microsoft's OLE technology (also known as the Component Object Model, or COM) to communicate with clients. COM technology permits a standard for real-time information exchange between software applications and process hardware to be defined. Some OPC specifications are published, but others are available only to members of the OPC Foundation. 
So while no company "owns" OPC and anyone can develop an OPC server whether or not they are a member of the OPC Foundation , non-members will not necessarily be using the latest specifications. It is up to each company that requires OPC products to ensure that their products are certified and that their system integrators have the necessary training. [ citation needed ]
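The uniform-access idea described above, one client working against any server because every server exposes the same interface, can be sketched in a few lines. The class and tag names are hypothetical; real OPC DA servers expose COM interfaces rather than Python methods, and this is an illustration of the design principle only.

```python
# Sketch of OPC's uniform-access principle: every server exposes the
# same read interface regardless of the underlying device, so one
# client works against all of them. Names and values are hypothetical.

class PLCServer:
    """Pretend server for a programmable logic controller."""
    def read(self, item_id):
        return {"temp": 21.5, "valve": 1}[item_id]

class SensorServer:
    """Pretend server for a standalone sensor; same interface."""
    def read(self, item_id):
        return {"temp": 19.0}[item_id]

def poll(server, item_id):
    # The client code is identical for every OPC-style server; no
    # per-device driver is needed on the client side.
    return server.read(item_id)

print(poll(PLCServer(), "temp"))     # 21.5
print(poll(SensorServer(), "temp"))  # 19.0
```

This is why, as the article notes, a hardware vendor writes one server and is done: any compliant client can then reach the device.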
https://en.wikipedia.org/wiki/Open_Platform_Communications
The Open Regulatory Annotation Database (also known as ORegAnno ) is designed to promote community-based curation of regulatory information. Specifically, the database contains information about regulatory regions , transcription factor binding sites , regulatory variants, and haplotypes . For each entry, cross-references are maintained to EnsEMBL , dbSNP , Entrez Gene , the NCBI Taxonomy database and PubMed . The information within ORegAnno is regularly mapped and provided as a UCSC Genome Browser track. Furthermore, each entry is associated with its experimental evidence, embedded as an Evidence Ontology within ORegAnno. This allows researchers to analyze regulatory data using their own criteria for the suitability of the supporting evidence. The project is open source: all data and all software produced in the project can be freely accessed and used. As of December 20, 2006, ORegAnno contained 4220 regulatory sequences (excluding deprecated records) comprising 2190 transcription factor binding sites, 1853 regulatory regions (enhancers, promoters, etc.), 170 regulatory polymorphisms, and 7 regulatory haplotypes for 17 different organisms (predominantly Drosophila melanogaster , Homo sapiens , Mus musculus , Caenorhabditis elegans , and Rattus norvegicus , in that order). These records were obtained by manual curation of 828 publications by 45 ORegAnno users from the gene regulation community. The ORegAnno publication queue contained 4215 publications, of which 858 were closed, 34 were in progress (open status), and 3321 were awaiting annotation (pending status). ORegAnno is continually updated, and current database contents should therefore be obtained from www.oreganno.org . The RegCreative jamboree grew out of a community initiative to curate, in perpetuity, the genomic sequences that have been experimentally determined to control gene expression.
This objective is of fundamental importance to evolutionary analysis and translational research, as regulatory mechanisms are widely implicated in species-specific adaptation and the etiology of disease. The initiative culminated in the formation of an international consortium of like-minded scientists dedicated to accomplishing this task. The RegCreative jamboree was the first opportunity for these groups to meet, to assess the current state of knowledge in gene regulation, and to begin to develop standards by which to curate regulatory information. In total, 44 researchers attended the workshop, from 9 countries and 23 institutions. Funding was also obtained from ENFIN, the BioSapiens Network, the FWO Research Foundation, Genome Canada and Genome British Columbia. The meeting produced a number of specific outcomes.
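The shape of an ORegAnno-style record, an annotation carrying cross-references plus its supporting evidence, can be sketched as follows. The field names here are simplified, hypothetical stand-ins; the actual schema is defined by the project at oreganno.org.

```python
# Illustrative shape of an ORegAnno-style record: a regulatory
# annotation with cross-references and experimental evidence.
# Field names are hypothetical, not the real ORegAnno schema.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    record_type: str   # e.g. "transcription factor binding site"
    species: str       # NCBI Taxonomy cross-reference
    gene: str          # Entrez Gene cross-reference
    evidence: list = field(default_factory=list)   # Evidence Ontology terms
    pubmed_ids: list = field(default_factory=list) # PubMed cross-references

    def is_supported(self):
        # Curators attach experimental evidence and citations, so
        # users can apply their own suitability criteria.
        return bool(self.evidence) and bool(self.pubmed_ids)

rec = Annotation(
    record_type="transcription factor binding site",
    species="Homo sapiens",
    gene="TP53",
    evidence=["gel shift assay"],
    pubmed_ids=["12345678"],
)
print(rec.is_supported())  # True
```

The design choice mirrored here is the one the article highlights: evidence travels with every record, rather than being implied by inclusion in the database.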
https://en.wikipedia.org/wiki/Open_Regulatory_Annotation_Database
Open Source Ecology ( OSE ) is a network of farmers, engineers, architects and supporters whose main goal is the eventual manufacturing of the Global Village Construction Set ( GVCS ). As described by Open Source Ecology, "the GVCS is an open technological platform that allows for the easy fabrication of the 50 types of industrial machines that it takes to build a small civilization with modern comforts". [ 3 ] Groups in Oberlin , Ohio , Pennsylvania , New York and California are developing blueprints and building prototypes in order to test them on the Factor e Farm in rural Missouri . [ 4 ] [ 5 ] [ 6 ] 3D-Print.com reports [ 7 ] that OSE has been experimenting with RepRap 3-D printers , as suggested by academics for sustainable development . [ 8 ] Marcin Jakubowski founded the group in 2003. [ 9 ] In the final year of his doctoral studies at the University of Wisconsin , he felt that his work was too closed off from the world's problems, and he wanted to go a different way. After graduation, he devoted himself entirely to OSE. OSE gained international attention in 2011 when Jakubowski presented his Global Village Construction Set TED Talk . [ 10 ] Soon after, the GVCS won Make magazine's Green Project Contest. The Internet blogs Gizmodo and Grist produced detailed features on OSE. Jakubowski has since become a Shuttleworth Foundation Fellow (2012) and TED Senior Fellow (2012). Open Source Ecology is also developing in Europe as OSE Germany, [ 11 ] an independent effort based on OSE's principles. In 2016, OSE and the Open Building Institute [ 12 ] joined forces to make affordable, ecological housing widely accessible. The initiative has prototyped the Seed Eco-Home [ 13 ] – a 1400-square-foot home built with the help of 50 people in a 5-day period – demonstrating that OSE's Extreme Manufacturing techniques can be applied to rapid swarm builds of large structures.
Materials for the Seed Eco-Home cost around US$30,000 in 2016, though the cost rose to approximately US$50,000 in 2022 due to rising lumber prices [ citation needed ] . Further, OBI has prototyped the Aquaponic Greenhouse, which was also built in 5 days with 50 people. The Factor e Farm is the headquarters where the machines are prototyped and tested. The farm also serves as a prototype: using the Open Source Ecology principles, four prototype modules have been built as a home. An added greenhouse demonstrates how a family can grow vegetables and fish. Outside, there is also a large garden including fruit trees. [ 14 ] For 2020, OSE was planning its most ambitious collaborative design effort by hosting an Incentive Challenge on the HeroX platform – to produce a professional-grade, open source, 3D-printed cordless drill that can be manufactured in distributed locations around the world. This project is intended to provide a proof-of-concept for the efficiency of open source development applied to hardware – in addition to its proven success with software. This effort was postponed due to COVID-19, and OSE pivoted to a product release of the Seed Eco-Home in 2021 to address the need for affordable, ecological housing. [ 15 ] In 2019, OSE updated its vision to collaborative design for a transparent and inclusive economy of abundance. [ 16 ] This reflects a shift from open source to open source and collaborative design . OSE began running its Open Source Microfactory STEAM Camps to emphasize the vision of collaborative design of real products. In 2018, the project achieved 33% completion. In 2014, 12 of the 50 machines had been designed, blueprinted, and prototyped, with four of those reaching the documentation stage. [ 17 ] [ 18 ] In October 2011 a Kickstarter fundraising campaign collected US$63,573 for project expenses and the construction of a training facility.
[ 19 ] The project has been funded by the Shuttleworth Foundation [ 20 ] and is a semifinalist in the Focus Forward Film Festival. [ 21 ] The Global Village Construction Set (GVCS) comprises 50 industrial machines: [ 25 ] [ 26 ] Compressed earth block press v4 · Concrete mixer · Sawmill · Bulldozer · Backhoe Tractor : LifeTrac v3 · Seeder · Hay rake · Microtractor · Rototiller · Spader · Hay cutter · Trencher · Bakery oven · Dairy milking machine · Micro combine harvester · Baler · Well- drilling rig Multimachine ( milling machine , drill press , and lathe ) · Ironworker · Laser cutter · Welder · Plasma cutter · Induction furnace · CNC torch table · Metal roller · Wire and rod mill · Press forge · Universal rotor · Drill press · 3D printer · 3D scanner · CNC circuit mill · Industrial robot · Woodchipper / Hammermill Power Cube : PowerCube v7 · Gasifier burner · Solar concentrator · Electric motor / generator · Hydraulic motor · Nickel–iron battery · Steam engine · Steam generator · Wind turbine · Pelletizer · Universal power supply Aluminium extractor · Bioplastic extruder Car · Truck The first time a Global Village Construction Set product was created by another group was in October 2011; Jason Smith with James Slade and his organization Creation Flame [ 27 ] developed a functioning open source CEB press. [ 28 ]
https://en.wikipedia.org/wiki/Open_Source_Ecology
Open Threat Exchange (OTX) is a crowd-sourced computer-security platform. [ 1 ] It has more than 180,000 participants in 140 countries who share more than 19 million potential threats daily. [ 2 ] It is free to use. [ 3 ] Founded in 2012, [ 4 ] OTX was created and is run by AlienVault (now AT&T Cybersecurity), a developer of commercial and open source solutions to manage cyber attacks. [ 5 ] The collaborative threat exchange was created partly as a counterweight to criminal hackers successfully working together and sharing information about viruses, malware and other cyber attacks. [ 6 ] OTX is cloud-hosted. Information sharing covers a wide range of issues related to security, including viruses, malware, intrusion detection and firewalls. Its automated tools cleanse, aggregate, validate and publish data shared by participants. [ 4 ] The data is validated by the OTX platform then stripped of information identifying the participating contributor. [ 6 ] In 2015, OTX 2.0 added a social network which enables members to share, discuss and research security threats, including via a real-time threat feed. [ 7 ] Users can share the IP addresses or websites from where attacks originated or look up specific threats to see if anyone has already left such information. [ 8 ] Users can subscribe to a “Pulse,” an analysis of a specific threat, including data on IoC, impact, and the targeted software. Pulses can be exported as STIX, JSON, OpenIOC, MAEC and CSV, and can be used to automatically update local security products. [ 7 ] Users can up-vote and comment on specific pulses to assist others in identifying the most important threats. [ 9 ] OTX combines social contributions with automated machine-to-machine tools that integrate with major security products such as firewalls and perimeter security hardware. [ 8 ] The platform can read security reports in .pdf, .csv, .json and other open formats. 
Relevant information is extracted automatically, assisting IT professionals to more readily analyze data. [ 8 ] Specific OTX components include a dashboard with details about the top malicious IPs around the world and to check the status of specific IPs; notifications should an organization's IP or domain be found in a hacker forum or blacklist, or be listed in OTX; and a feature to review log files to determine if there has been communication with known malicious IPs. [ 6 ] In 2016, AlienVault released a new version of OTX allowing participants to create private communities and discussion groups to share information on threats only within the group. The feature is intended to facilitate more in-depth discussions on specific threats, particular industries, and different regions of the world. Threat data from groups can also be distributed to subscribers of managed service providers using OTX. [ 10 ] OTX is a big data platform that integrates natural language processing and machine learning to facilitate the collection and correlation of data from many sources, including third-party threat feeds, websites, external APIs and local agents. [ 11 ] In 2015, AlienVault partnered with Intel to coordinate real-time threat information on OTX. [ 12 ] A similar deal with Hewlett Packard was announced the same year. [ 1 ] Both Facebook and IBM have threat exchange platforms. The Facebook ThreatExchange is in beta and requires an application or invitation to join. [ 13 ] IBM launched IBM X-Force Exchange in April 2015. [ 14 ]
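Because pulses can be exported as plain JSON, they can be processed with ordinary tooling. The sketch below is illustrative only: the field names used ("indicators", "type", "indicator") are assumptions about the general shape of a pulse export, not a documented schema.

```python
import json

def extract_indicators(pulse_json: str):
    """Collect (type, value) pairs from a JSON-exported pulse.

    NOTE: the "indicators"/"type"/"indicator" field names are assumed
    for illustration; consult the actual export for the real schema.
    """
    pulse = json.loads(pulse_json)
    return [(i.get("type"), i.get("indicator"))
            for i in pulse.get("indicators", [])]

# A minimal hand-made pulse to demonstrate the idea:
sample = json.dumps({
    "name": "Example pulse",
    "indicators": [
        {"type": "IPv4", "indicator": "203.0.113.7"},
        {"type": "domain", "indicator": "malicious.example"},
    ],
})
print(extract_indicators(sample))
```

Extracted pairs like these could then be fed into a local blocklist or log-matching step, which is the kind of automatic update of local security products the article describes.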
https://en.wikipedia.org/wiki/Open_Threat_Exchange
The Open Tree of Life is an online phylogenetic tree of life – a collaborative effort, funded by the National Science Foundation . [ 2 ] [ 3 ] The first draft, including 2.3 million species, was released in September 2015. [ 4 ] The interactive graph allows the user to zoom in to taxonomic classifications, phylogenetic trees, and information about a node. Clicking on a species will return its source and reference taxonomy. The project uses a supertree approach to generate a single phylogenetic tree (served at tree.opentreeoflife.org [ 5 ] ) from a comprehensive taxonomy and a curated set of published phylogenetic estimates. The taxonomy is a combination of several large classifications produced by other projects; it is created using a software tool called "smasher". [ 6 ] The resulting taxonomy is called the Open Tree Taxonomy (OTT) and can be browsed on-line. [ 7 ] The project was started in June 2012 with a three-year NSF award to researchers at ten universities. In 2015, a two-year supplemental award was made to researchers at three institutions.
https://en.wikipedia.org/wiki/Open_Tree_of_Life
Open Virtualization Format ( OVF ) is an open standard for packaging and distributing virtual appliances or, more generally, software to be run in virtual machines . The standard describes an "open, secure, portable, efficient and extensible format for the packaging and distribution of software to be run in virtual machines ". The OVF standard is not tied to any particular hypervisor or instruction set architecture . The unit of packaging and distribution is a so-called OVF Package which may contain one or more virtual systems, each of which can be deployed to a virtual machine. In September 2007, VMware , Dell , HP , IBM , Microsoft and XenSource submitted to the Distributed Management Task Force (DMTF) a proposal for OVF, then named "Open Virtual Machine Format". [ 1 ] The DMTF subsequently released the OVF Specification V1.0.0 as a preliminary standard in September 2008, and V1.1.0 in January 2010. [ 2 ] In January 2013, DMTF released the second version of the standard, OVF 2.0, which applies to emerging cloud use cases and provides important developments from OVF 1.0, including improved network configuration support and package encryption capabilities for safe delivery. ANSI has ratified OVF 1.1.0 as ANSI standard INCITS 469-2010. [ 3 ] OVF 1.1 was adopted in August 2011 by ISO/IEC JTC 1/SC 38 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as an International Standard ISO/IEC 17203. [ 4 ] OVF 2.0 brings an enhanced set of capabilities to the packaging of virtual machines, making the standard applicable to a broader range of cloud use cases that are emerging as the industry enters the cloud era. The most significant improvements include support for network configuration along with the ability to encrypt the package to ensure safe delivery. [ 5 ] An OVF package consists of several files placed in one directory. An OVF package always contains exactly one OVF descriptor (a file with extension .ovf). 
The OVF descriptor is an XML file which describes the packaged virtual machine; it contains the metadata for the OVF package, such as name, hardware requirements, references to the other files in the OVF package and human-readable descriptions. In addition to the OVF descriptor, the OVF package will typically contain one or more disk images , and optionally certificate files and other auxiliary files. [ 6 ] The entire directory can be distributed as an Open Virtual Appliance (OVA) package, which is a tar archive file with the OVF directory inside. OVF has been broadly accepted. [ 7 ] Several virtualization players in the industry have announced support for OVF. [ 8 ] [ 9 ] [ 10 ] [ 11 ]
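Since an OVA is just a tar archive, the "exactly one .ovf descriptor" rule can be checked with standard tooling. The sketch below (file names such as `appliance.ovf` and `disk1.vmdk` are placeholders, not from any real package) builds a toy OVA in memory and locates its descriptor.

```python
import io
import tarfile

def find_ovf_descriptor(ova_bytes: bytes) -> str:
    """Return the name of the OVF descriptor (*.ovf) inside an OVA.

    An OVA is a plain tar archive; a valid OVF package carries
    exactly one descriptor file with the .ovf extension.
    """
    with tarfile.open(fileobj=io.BytesIO(ova_bytes)) as tar:
        descriptors = [m.name for m in tar.getmembers()
                       if m.name.endswith(".ovf")]
    if len(descriptors) != 1:
        raise ValueError("an OVF package must contain exactly one descriptor")
    return descriptors[0]

# Build a toy OVA in memory to demonstrate (contents are placeholders):
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("appliance.ovf", b"<Envelope/>"),
                          ("disk1.vmdk", b"\x00" * 16)]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

print(find_ovf_descriptor(buf.getvalue()))
```

A real deployment tool would go on to parse the descriptor's XML for hardware requirements and file references, but that parsing is beyond this sketch.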
https://en.wikipedia.org/wiki/Open_Virtualization_Format
In mathematics , more specifically in topology , an open map is a function between two topological spaces that maps open sets to open sets. [ 1 ] [ 2 ] [ 3 ] That is, a function f : X → Y is open if for any open set U in X, the image f(U) is open in Y. Likewise, a closed map is a function that maps closed sets to closed sets. [ 3 ] [ 4 ] A map may be open, closed, both, or neither; [ 5 ] in particular, an open map need not be closed and vice versa. [ 6 ] Open [ 7 ] and closed [ 8 ] maps are not necessarily continuous . [ 4 ] Further, continuity is independent of openness and closedness in the general case, and a continuous function may have one, both, or neither property; [ 3 ] this fact remains true even if one restricts oneself to metric spaces. [ 9 ] Although their definitions seem more natural, open and closed maps are much less important than continuous maps. Recall that, by definition, a function f : X → Y is continuous if the preimage of every open set of Y is open in X. [ 2 ] (Equivalently, if the preimage of every closed set of Y is closed in X.) Early study of open maps was pioneered by Simion Stoilow and Gordon Thomas Whyburn . [ 10 ] If S is a subset of a topological space then let S̄ and Cl S (resp. Int S) denote the closure (resp. interior ) of S in that space. Let f : X → Y be a function between topological spaces . If S is any set then f(S) := {f(s) : s ∈ S ∩ domain f} is called the image of S under f. 
There are two different competing, but closely related, definitions of "open map" that are widely used, where both of these definitions can be summarized as: "it is a map that sends open sets to open sets." The following terminology is sometimes used to distinguish between the two definitions. A map f : X → Y is called a strongly open map if the image f(U) of every open subset U of X is open in f's codomain Y, and a relatively open map if the image of every open subset of X is open in f's image Im f := f(X) (endowed with the subspace topology inherited from Y). Every strongly open map is a relatively open map. However, these definitions are not equivalent in general. A surjective map is relatively open if and only if it is strongly open; so for this important special case the definitions are equivalent. More generally, a map f : X → Y is relatively open if and only if the surjection f : X → f(X) is a strongly open map. Because X is always an open subset of X, the image f(X) = Im f of a strongly open map f : X → Y must be an open subset of its codomain Y. In fact, a relatively open map is a strongly open map if and only if its image is an open subset of its codomain. In summary, a map is strongly open if and only if it is relatively open and its image is open in its codomain. By using this characterization, it is often straightforward to apply results involving one of these two definitions of "open map" to a situation involving the other definition. The discussion above will also apply to closed maps if each instance of the word "open" is replaced with the word "closed". 
A map f : X → Y is called an open map or a strongly open map if it satisfies any of the following equivalent conditions: If B is a basis for X then the following can be appended to this list: A map f : X → Y is called a relatively closed map if whenever C is a closed subset of the domain X then f(C) is a closed subset of f's image Im f := f(X), where, as usual, this set is endowed with the subspace topology induced on it by f's codomain Y. A map f : X → Y is called a closed map or a strongly closed map if it satisfies any of the following equivalent conditions: A surjective map is strongly closed if and only if it is relatively closed. So for this important special case, the two definitions are equivalent. By definition, the map f : X → Y is a relatively closed map if and only if the surjection f : X → Im f is a strongly closed map. If in the open set definition of " continuous map " (which is the statement: "every preimage of an open set is open"), both instances of the word "open" are replaced with "closed" then the statement that results ("every preimage of a closed set is closed") is equivalent to continuity. This does not happen with the definition of "open map" (which is: "every image of an open set is open") since the statement that results ("every image of a closed set is closed") is the definition of "closed map", which is in general not equivalent to openness. There exist open maps that are not closed and there also exist closed maps that are not open. 
This difference between open/closed maps and continuous maps is ultimately due to the fact that for any set S, only f(X ∖ S) ⊇ f(X) ∖ f(S) is guaranteed in general, whereas for preimages, equality f⁻¹(Y ∖ S) = f⁻¹(Y) ∖ f⁻¹(S) always holds. The function f : ℝ → ℝ defined by f(x) = x² is continuous, closed, and relatively open, but not (strongly) open. This is because if U = (a, b) is any open interval in f's domain ℝ that does not contain 0 then f(U) = (min{a², b²}, max{a², b²}), where this open interval is an open subset of both ℝ and Im f := f(ℝ) = [0, ∞). However, if U = (a, b) is any open interval in ℝ that contains 0 then f(U) = [0, max{a², b²}), which is not an open subset of f's codomain ℝ but is an open subset of Im f = [0, ∞). Because the set of all open intervals in ℝ is a basis for the Euclidean topology on ℝ, this shows that f : ℝ → ℝ is relatively open but not (strongly) open. If Y has the discrete topology (that is, all subsets are open and closed) then every function f : X → Y is both open and closed (but not necessarily continuous). 
For example, the floor function from ℝ to ℤ is open and closed, but not continuous. This example shows that the image of a connected space under an open or closed map need not be connected. Whenever we have a product of topological spaces X = ∏ Xᵢ, the natural projections pᵢ : X → Xᵢ are open [ 12 ] [ 13 ] (as well as continuous). Since the projections of fiber bundles and covering maps are locally natural projections of products, these are also open maps. Projections need not be closed however. Consider for instance the projection p₁ : ℝ² → ℝ on the first component; then the set A = {(x, 1/x) : x ≠ 0} is closed in ℝ², but p₁(A) = ℝ ∖ {0} is not closed in ℝ. However, for a compact space Y, the projection X × Y → X is closed. This is essentially the tube lemma . To every point on the unit circle we can associate the angle of the positive x-axis with the ray connecting the point with the origin. This function from the unit circle to the half-open interval [0, 2π) is bijective, open, and closed, but not continuous. It shows that the image of a compact space under an open or closed map need not be compact. Also note that if we consider this as a function from the unit circle to the real numbers, then it is neither open nor closed. Specifying the codomain is essential. Every homeomorphism is open, closed, and continuous. In fact, a bijective continuous map is a homeomorphism if and only if it is open, or equivalently, if and only if it is closed. 
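The behaviour of x ↦ x² on open intervals can be checked concretely. The helper below (an illustrative sketch, not part of the article's sources) computes the image of an open interval (a, b) under squaring and reports whether that image is an open subset of ℝ, reproducing the case split described above.

```python
def image_of_open_interval(a, b):
    """Image of the open interval (a, b) under x -> x**2.

    Returns (lo, hi, open_in_R): the endpoints of the image and
    whether the image is an open subset of the real line.
    The image is [lo, hi) when 0 lies in (a, b), and (lo, hi) otherwise.
    """
    assert a < b
    if a < 0 < b:
        # Interval contains 0: image is [0, max(a^2, b^2)), not open in R,
        # though it IS open in the subspace [0, inf) (relative openness).
        return (0.0, max(a * a, b * b), False)
    lo, hi = sorted((a * a, b * b))
    return (lo, hi, True)  # image is the open interval (lo, hi)

print(image_of_open_interval(1, 2))    # interval away from 0
print(image_of_open_interval(-1, 2))   # interval containing 0
```

The second call illustrates why squaring is relatively open but not strongly open: the image [0, 4) is open in [0, ∞) but not in ℝ.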
The composition of two (strongly) open maps is an open map and the composition of two (strongly) closed maps is a closed map. [ 14 ] [ 15 ] However, the composition of two relatively open maps need not be relatively open and similarly, the composition of two relatively closed maps need not be relatively closed. If f : X → Y is strongly open (respectively, strongly closed) and g : Y → Z is relatively open (respectively, relatively closed) then g ∘ f : X → Z is relatively open (respectively, relatively closed). Let f : X → Y be a map. Given any subset T ⊆ Y, if f : X → Y is a relatively open (respectively, relatively closed, strongly open, strongly closed, continuous, surjective ) map then the same is true of its restriction f|_{f⁻¹(T)} : f⁻¹(T) → T to the f-saturated subset f⁻¹(T). The categorical sum of two open maps is open, or of two closed maps is closed. [ 15 ] The categorical product of two open maps is open, however, the categorical product of two closed maps need not be closed. [ 14 ] [ 15 ] A bijective map is open if and only if it is closed. The inverse of a bijective continuous map is a bijective open/closed map (and vice versa). A surjective open map is not necessarily a closed map, and likewise, a surjective closed map is not necessarily an open map. All local homeomorphisms , including all coordinate charts on manifolds and all covering maps , are open maps. Closed map lemma — Every continuous function f : X → Y from a compact space X to a Hausdorff space Y is closed and proper (meaning that preimages of compact sets are compact). 
A variant of the closed map lemma states that if a continuous function between locally compact Hausdorff spaces is proper then it is also closed. In complex analysis , the identically named open mapping theorem states that every non-constant holomorphic function defined on a connected open subset of the complex plane is an open map. The invariance of domain theorem states that a continuous and locally injective function between two n-dimensional topological manifolds must be open. Invariance of domain — If U is an open subset of ℝⁿ and f : U → ℝⁿ is an injective continuous map , then V := f(U) is open in ℝⁿ and f is a homeomorphism between U and V. In functional analysis , the open mapping theorem states that every surjective continuous linear operator between Banach spaces is an open map. This theorem has been generalized to topological vector spaces beyond just Banach spaces. A surjective map f : X → Y is called an almost open map if for every y ∈ Y there exists some x ∈ f⁻¹(y) such that x is a point of openness for f, which by definition means that for every open neighborhood U of x, f(U) is a neighborhood of f(x) in Y (note that the neighborhood f(U) is not required to be an open neighborhood). Every surjective open map is an almost open map but in general, the converse is not necessarily true. 
If a surjection f : (X, τ) → (Y, σ) is an almost open map then it will be an open map if it satisfies the following condition (a condition that does not depend in any way on Y's topology σ): If the map is continuous then the above condition is also necessary for the map to be open. That is, if f : X → Y is a continuous surjection then it is an open map if and only if it is almost open and it satisfies the above condition. If f : X → Y is a continuous map that is also open or closed then: In the first two cases, being open or closed is merely a sufficient condition for the conclusion that follows. In the third case, it is necessary as well. If f : X → Y is a continuous (strongly) open map, A ⊆ X, and S ⊆ Y, then:
https://en.wikipedia.org/wiki/Open_and_closed_maps
In mathematics , an open book decomposition (or simply an open book ) is a decomposition of a closed oriented 3-manifold M into a union of surfaces (necessarily with boundary) and solid tori . Open books have relevance to contact geometry , with a famous theorem of Emmanuel Giroux (given below) that shows that contact geometry can be studied from an entirely topological viewpoint. Definition. An open book decomposition of a 3-dimensional manifold M is a pair ( B , π) where B is an oriented link in M , called the binding of the open book, and π : M ∖ B → S¹ is a fibration such that each fiber π⁻¹(θ) is the interior of a compact surface Σ ⊂ M whose boundary is B ; each such surface is called a page of the open book. This is the special case m = 3 of an open book decomposition of an m -dimensional manifold, for any m . The definition for general m is similar, except that the surface with boundary (Σ, B ) is replaced by an ( m − 1)-manifold with boundary ( P , ∂ P ). Equivalently, the open book decomposition can be thought of as a homeomorphism of M to the quotient space P × [0, 1] / ∼, where ( x , t ) ∼ ( x , s ) for x ∈ ∂ P and ( x , 0) ∼ ( f ( x ), 1) for x ∈ P , and where f : P → P is a self-homeomorphism preserving the boundary. This quotient space is called a relative mapping torus . [ 1 ] When Σ is an oriented compact surface with n boundary components and φ: Σ → Σ is a homeomorphism which is the identity near the boundary, we can construct an open book by first forming the mapping torus Σ φ . Since φ is the identity on ∂Σ, ∂Σ φ is the trivial circle bundle over a union of circles, that is, a union of tori; one torus for each boundary component. To complete the construction, solid tori are glued to fill in the boundary tori so that each circle S¹ × { p } ⊂ S¹ × ∂ D² is identified with the boundary of a page. In this case, the binding is the collection of n cores S¹ × { q } of the n solid tori glued into the mapping torus, for arbitrarily chosen q ∈ D². It is known that any open book can be constructed this way. 
As the only information used in the construction is the surface and the homeomorphism, an alternate definition of open book is simply the pair (Σ, φ) with the construction understood. In short, an open book is a mapping torus with solid tori glued in so that the core circle of each torus runs parallel to the boundary of the fiber. Each torus in ∂Σ φ is fibered by circles parallel to the binding, each circle a boundary component of a page. One envisions a rolodex -looking structure for a neighborhood of the binding (that is, the solid torus glued to ∂Σ φ )—the pages of the rolodex connect to pages of the open book and the center of the rolodex is the binding. Thus the term open book . It is a 1972 theorem of Elmar Winkelnkemper that for m > 6, a simply-connected m -dimensional manifold has an open book decomposition if and only if it has signature 0. In 1977 Terry Lawson proved that for odd m > 6, every m -dimensional manifold has an open book decomposition, a result extended to 5-manifolds and manifolds with boundary by Frank Quinn in 1979. Quinn also showed that for even m > 6, an m -dimensional manifold has an open book decomposition if and only if an asymmetric Witt group obstruction is 0. [ 1 ] In 2002, Emmanuel Giroux published the following result: Theorem. Let M be a compact oriented 3-manifold. Then there is a bijection between the set of oriented contact structures on M up to isotopy and the set of open book decompositions of M up to positive stabilization. Positive stabilization consists of modifying the page by adding a 2-dimensional 1-handle and modifying the monodromy by adding a positive Dehn twist along a curve that runs over that handle exactly once. Implicit in this theorem is that the new open book defines the same contact 3-manifold. Giroux's result has led to some breakthroughs in what is becoming more commonly called contact topology , such as the classification of contact structures on certain classes of 3-manifolds. 
Roughly speaking, a contact structure corresponds to an open book if, away from the binding, the contact distribution is isotopic to the tangent spaces of the pages through confoliations . One imagines smoothing the contact planes (preserving the contact condition almost everywhere) to lie tangent to the pages.
https://en.wikipedia.org/wiki/Open_book_decomposition
In R&D management and systems development , open coopetition or open-coopetition is a neologism describing cooperation among competitors in the open-source arena. The term was first coined by the scholars Jose Teixeira and Tingting Lin to describe how rival firms that compete with similar products in the same markets nevertheless cooperate with each other in the development of open-source projects, as exemplified by Apple , Samsung , Google and Nokia in the co-development of WebKit . [ 1 ] More recently, open coopetition has also started being used to refer to strategic approaches where competing organizations collaborate on open innovation initiatives while maintaining their competitive market positions. [ 2 ] Open-coopetition is a compound-word term bridging coopetition and open-source . Coopetition refers to a paradoxical relationship between two or more actors simultaneously involved in cooperative and competitive interactions; [ 3 ] [ 4 ] open-source refers both to a development method that emphasizes transparency and collaboration, and to a "private-collective" innovation model with features from both private investment and collective action, [ 5 ] whereby firms contribute towards the creation of public goods while giving up associated intellectual property rights such as patents, copyright, licenses, or trade secrets. By exploring coopetition in the particular context of open-source , open-coopetition emphasizes transparency in the co-development of technological artifacts that become available to the public under an open-source license , allowing anyone to freely obtain, study, modify and redistribute them. Within open-coopetition , development transparency and sense of community are maximized, while managerial control and IP enforcement are minimized. Open-coopetitive relationships are paradoxical, as the core managerial concepts of property, contract and price play an outlier role. 
The openness characteristic of open-source projects also distinguishes open-coopetition from other forms of cooperative arrangements by its inclusiveness: everybody can contribute. Users and other contributors do not need to hold a supplier contract or sign a legal intellectual property arrangement to contribute; nor do they need to be members of a particular firm, or affiliated with a particular joint venture or consortium, to be able to contribute. In the words of Massimo Banzi , "You don't need anyone's permission to make something great". [ 6 ] More recently, open-coopetition has been used to describe open innovation among competitors more broadly, with many cases outside the software industry . [ 2 ] [ 7 ] [ 8 ] While some authors use open-coopetition to emphasize the production of open-source software among competitors, others use it to emphasize open innovation among competitors. In a large-scale study involving multiple European-based software-intensive firms, the scholars Pär Ågerfalk and Brian Fitzgerald revealed a shift from " open-source as a community of individual developers to open-source as a community of commercial organizations, primarily small and medium-sized enterprises, operating as a symbiotic ecosystem in a spirit of coopetition ". [ 9 ] Even if they were exploring open-sourcing as "a novel and unconventional approach to global sourcing and coopetition ", they captured the following quote, which highlights that competition in the open-source arena is not business as usual: "In a traditional market you don't call up your competitor and be like, oh, well tell me what your stuff does. But in open source you do." [Open Source Program Director, at IONA ] [ 9 ] Also in the academic world, after following a software company based in Norway for over five years, and while theorizing on the concept of software ecosystem , the academic Geir K. 
Hanssen noted that the characteristic networks of a software ecosystem , whether open-source or proprietary, can embed competing organizations. "Software ecosystems have a networked character. CSoft and its external environment constitute a network of customers and third party organizations. Even competitors may be considered a part of this network, although this aspect has not been studied in particular here." [ 10 ] In an opinion article entitled Open Source Coopetition Fueled by Linux Foundation Growth , the journalist and market analyst Jay Lyman highlights that "working with direct rivals may have been unthinkable 10 years ago, but Linux , open-source and organizations such as the Linux Foundation have highlighted how solving common problems and easing customer pain and friction in using and choosing different technologies can truly drive innovation and traction in the market." [ 11 ] The term "open source coopetition" was employed to highlight the role of the Linux Foundation as a mediator of collaboration among rival firms. At the OpenStack summit in Hong Kong , the co-founder of Mirantis , Boris Renski, talked about his job of figuring out how to co-opete in the crowded OpenStack open-source community. In a 43-minute broadcast video, Boris Renski shed some light on OpenStack coopetition politics and shared a subjective view on the strategies of individual players within the OpenStack community (e.g., Rackspace , Mirantis , IBM , HP and Red Hat among others). [ 12 ] The Mirantis co-founder provided a rich description of an open-source community working in co-opetition. Along these lines, the pioneering scholarly work of Germonprez et al. (2013) [ 13 ] reported on how key business actors within the financial services industry, which traditionally viewed open-source software with skepticism, formed an open-source ‘community of competitors’. 
By taking the case of OpenMAMA , a Middleware Agnostic Messaging API used by some of the world's largest financial players, they show that corporate market rivals (e.g., J. P. Morgan , Bank of America , IBM and BMC ) can coexist in open-source communities and intentionally coordinate activities for mutual benefit in precise, market-focused, and non-differentiating engagements. Their work pointed out that highly competitive, capital-oriented industries do not epitomize the traditional, grassroots idea from which open-source software was originally born. Furthermore, they argued that open-source communities can be deliberately designed to include competing vendors and customers under neutral institutional structures (e.g., foundations and steering committees). In an academic paper entitled "Collaboration in the open-source arena: The WebKit case", the scholars Jose Teixeira and Tingting Lin conducted an ethnographically informed social network analysis of the development of the WebKit open-source web browsing technologies. Among the reported findings, they pointed out that even though Apple and Samsung were involved in expensive patent wars in the courts at the time, they still collaborated in the open-source arena. As some of the research results did not confirm prior research in coopetition , [ 3 ] [ 4 ] the authors proposed and coined the "open-coopetition" term while emphasizing the openness of collaborating with competitors in the open-source arena. [ 1 ] Turning to OpenStack , the scholars Teixeira et al. (2015) [ 14 ] went further and modeled and analyzed both collaborative and competitive networks in the OpenStack open-source project (a large and complex cloud computing infrastructure for big data ).
Somewhat surprisingly, their results indicate that competition for the same revenue model (i.e., operating conflicting business models) does not necessarily affect collaboration within the OpenStack ecosystem ; in other words, competition among firms did not significantly influence collaboration among the software developers affiliated with them. Furthermore, the expected social tendency of developers to work with developers from the same firm (i.e., homophily ) did not hold within the OpenStack ecosystem . The case of OpenStack proved to be largely about genuine collaboration in software development despite ubiquitous competition among the firms that produce and use the software. A related study by Linåker et al. (2016) [ 15 ] analyzed the Apache Hadoop ecosystem in a quantitative longitudinal case study to investigate changing stakeholder influence and collaboration patterns. They found that the collaborative network had a rather stable number of network components (i.e., sub-communities within the community) with many unconnected stakeholders. Furthermore, these components were dominated by a core set of stakeholders that engaged in most of the collaborative relationships. As in OpenStack , there was much cooperation among competing and non-competing actors within the Apache Hadoop ecosystem; in other words, firms with competing business models collaborated as openly as non-rivaling firms. Finally, they also argued that the openness of software ecosystems decreases the distance to competitors within the same ecosystem, so it becomes possible and important to track what competitors do within it: knowing about their existing collaborations, contributions, and interests in specific features offers valuable information about competitors’ strategies and tactics. In a study addressing coopetition in the cloud computing industry, Teixeira et al. [ 16 ] analyzed coopetition not only among individuals and organizations but also among cohesive inter-organizational networks .
Relationships among individuals were modeled and visualized in 2D longitudinal visualizations, and relationships among inter-organizational networks (e.g., alliances, consortia or ecosystems) were modeled and visualized in 3D longitudinal visualizations. The authors added evidence to prior research [ 4 ] suggesting that competition is a multi-level phenomenon influenced by individual-level, organizational-level, and network-level factors. Noting that many firms engaging in open-coopetition actively manage multiple portfolios of alliances in the software industry (i.e., many strategically contribute to multiple open-source software ecosystems ), and analyzing the co-evolution of the OpenStack and CloudStack cloud computing platforms, the same authors propose that development transparency and weak intellectual property rights, two well-known characteristics of open-source ecosystems , allow an easier transfer of information and resources from one alliance to another. Even if openness enables a focal firm to transfer information and resources more easily between multiple alliances , such 'ease of transfer' should not be seen as a source of competitive advantage, as competitors can do the same. In a study explicitly addressing coopetition in open-source software ecosystems , Nguyen Duc et al. (2017) [ 17 ] identified a number of situations in which different actors within a software ecosystem deal with collaborative-competitive issues: competitive behavior within open-source software ecosystems creates friction with the more purist view of free and open-source software . The same authors reported on working practices that conflict with the more traditional values of free and open-source software .
The same study also unfolded a number of benefits that organizations can reap by actively contributing to open-source software ecosystems that encompass both cooperative and competitive relationships. In the last chapter of a book dedicated to coopetition strategies, the scholars Frédéric Le Roy and Henry Chesbrough developed the concept of open-coopetition by combining insights from both the open innovation and coopetition literatures. [ 7 ] They moved from open-coopetition in the specific realm of open-source software to the broader context of open innovation among competitors. Their work defines open-coopetition as "open innovation between competitors including collaboration", outlines key success factors of open innovation based on collaboration with competitors, and calls for further research on the topic. [ 7 ] While proposing a research agenda for open-coopetition, Roth et al. (2019) argued that there is no need to narrow the concept of open coopetition to the software industry. [ 8 ] More broadly, they redefined the concept as "simultaneously collaborative and competitive open innovation between competitors and third parties such as networks, platforms, communities or ecosystems" . [ 8 ] Furthermore, they also argued that open-coopetition not only takes place in a growing number of industries but also constitutes both a management challenge at the individual or inter-firm level and an organizing principle of many regional or national innovation systems. [ 8 ] While prior work explored open-coopetition among individuals, firms, platforms and ecosystems, Roth et al. (2019) discussed open-coopetition within public–private partnerships and the triple helix model of innovation , which refers to a set of interactions between academia (the university), industry and government. An editorial review of a special issue on "coopetition strategies" pointed out the popularity of the open-coopetition strategy among firms.
The scholars pinpointed that, from a strategic management perspective, "it seems very important to know why, how and for which outcomes they follow this kind of strategy". [ 18 ] A Finnish policy white paper entitled "From Industry X to Industry 6.0" by Business Finland pointed out that "open-coopetition" requires new mindsets, changes in operating methods, and new orchestration needs. This is perhaps the first time the term "open-coopetition" was referred to in a public policy document. [ 19 ] A doctoral dissertation entitled "Innovating Innovation Management in the Medical Device SME Sector Through Coopetition", defended by Dirk Dembski in August 2022 at Sheffield Hallam University, UK, explored the applicability of open-source coopetition in the highly regulated European medical device market, characterized by intense and evolving regulation. Although the term "open-coopetition" is not explicitly used, some of the presented qualitative evidence probed the applicability of open source in the European medical device industry. On the one hand, these SMEs recognize the value that the open-source model could bring to working with others (including competitors). On the other hand, they are "unwilling" or "not ready" to share their algorithms. [ 20 ] Empirical work investigating open-coopetition in the automotive industry by Jose Teixeira suggested that cooperating with competitors in the open-source arena is not only about saving money but also about saving time: "they jump over the overheads costs related to sourcing software in the traditional way that can involve long negotiations, formulation of legal contracts, and stipulation of diverse intellectual property arrangements (e.g., patents and copyrights issues, distribution and end-user licensing agreements as well as non-disclosure agreements)." [ 21 ] The same author also pointed out the practical benefits of open-source software in reducing duplicated effort in both the production and maintenance of software.
Furthermore, the inclusiveness and openness of open-source software projects encourage contributions from enthusiasts, students, hackers, and academics. The same author also suggested industrial convergence and increased competition as antecedents of open-coopetition: "As software becomes increasingly important, auto-makers fear the convergence with the software industry. In an era where cars are 1) increasingly powered by software in general and open-source software in particular, 2) connect to other mobile products such as smart-phones and tablets, and 3) integrate with a number of digital services (e.g., navigation, assistance, and entertainment), new entrants in the automobile industry, especially software-savvy organizations such as Apple and Google, can challenge the established players." [ 21 ] Ghislain de Vergnette, director of Product and Marketing at GOODSID, announced a new product that aimed “to integrate a variety of solutions, whether complementary or competing, in a spirit of open coopetition.” This is perhaps one of the first times the term open coopetition was used within the context of startups. The launch of a new product in the “spirit of open coopetition” emphasised the openness of their technology and their willingness to integrate their product with competing ones. [ 22 ] Cases of open coopetition are common in the software industry in general. Some cases also occur in the electronics , semiconductors , automotive , financial , telecommunications , retail , education , healthcare , defense , aerospace , and additive manufacturing industries. Cases of open coopetition are often associated with high-tech corporations and startups based in the USA (mostly on the West Coast ). Cases can also be recognized in Cuba, Brazil, Europe (predominantly in Western Europe ), India, South Korea, China, Vietnam, Australia, and Japan.
Many of the software projects encompassing open coopetition are legally governed by foundations such as the Linux Foundation , the Free Software Foundation , the Apache Software Foundation , the Eclipse Foundation , the Cloud Native Computing Foundation , and the X.Org Foundation , among many others. Most of the Linux Foundation's collaborative projects are coopetitive in nature: the Linux Foundation claims to be "a neutral home for collaborative development". [ 23 ] Furthermore, many coopetitive open-source projects dealing with both software and hardware (e.g., computer graphics , data storage ) are coordinated by standards organizations such as the Khronos Group , W3C and the Open Compute Project .
https://en.wikipedia.org/wiki/Open_coopetition
Open flow microperfusion ( OFM ) is a sampling method for clinical and preclinical drug development studies and biomarker research. OFM is designed for continuous sampling of analytes from the interstitial fluid (ISF) of various tissues. It provides direct access to the ISF by insertion of a small, minimally invasive, membrane-free probe with macroscopic openings. [ 1 ] Thus, the entire biochemical information of the ISF becomes accessible regardless of the analyte's molecular size, protein-binding property or lipophilicity . [ citation needed ] OFM is capable of sampling lipophilic and hydrophilic compounds, [ 2 ] protein bound and unbound drugs, [ 3 ] [ 4 ] neurotransmitters , peptides and proteins , antibodies , [ 5 ] [ 6 ] [ 7 ] nanoparticles and nanocarriers , enzymes and vesicles . The OFM probes are perfused with a physiological solution (the perfusate) which equilibrates with the ISF of the surrounding tissue. Operating flow rates range from 0.1 to 10 μL/min. OFM allows unrestricted exchange of compounds via an open structure across the open exchange area of the probe. This exchange of compounds between the probe’s perfusate and the surrounding ISF is driven by convection and diffusion, and occurs non-selectively in either direction (Figure 1). The direct liquid pathway between the probe’s perfusate and the surrounding fluid results in collection of ISF samples. These samples can be collected frequently and are then subjected to bioanalytical analysis to enable monitoring of substance concentrations with temporal resolution during the whole sampling period. [ 8 ] [ 9 ] The concentric OFM probe (Figure 2) works according to the same principle. The perfusate is pumped to the tip of the OFM probe through the inner, thin lumen and exits beyond the Open Exchange Area, where it then mixes with exogenous substances present in the ISF before being withdrawn through the outer, thick lumen. 
[ citation needed ] The first OFM sampling probe to be used as an alternative to microdialysis was described in an Austrian patent application filed by Falko Skrabal in 1987, in which OFM was described as a device that can be implanted into the tissue of living organisms. [ 10 ] In 1992, a US patent was filed claiming a device for determining at least one medical variable in the tissue of living organisms. [ 11 ] In a later patent by Helmut Masoner, Falko Skrabal and Helmut List, a linear type of sampling probe with macroscopic circular holes was also disclosed. [ 12 ] Alternative and current OFM versions for dermal and adipose tissue application were developed by Joanneum Research and patented by Manfred Bodenlenz et al. [ 13 ] [ 14 ] Alternative materials featuring low absorption were used to enable manufacturing of probes with diameters of 0.55 mm and exchange areas of 15 mm in length. For cerebral application, special OFM probes were patented by Birngruber et al. [ 15 ] Additionally, a patent was filed to manage the fluid handling of the ISF by using a portable peristaltic pump with a flow range of 0.1 to 10 μL/min that enables operation of up to three probes per pump. [ 16 ] Two types of OFM probes are currently available: linear OFM probes for implantation into superficial tissues such as skin (dermal OFM, dOFM) and subcutaneous adipose tissue (adipose OFM, aOFM), and concentric probes for implantation into various regions of the brain (cerebral OFM, cOFM). [ citation needed ] OFM is routinely applied in pharmaceutical research, both in preclinical studies (e.g. in mice, rats, pigs and primates) and in clinical studies in humans (Figure 3). OFM-related procedures such as probe insertion or prolonged sampling with numerous probes are well tolerated by the subjects. [ 1 ] dOFM (Figure 4) allows the investigation of the transport of drugs in the dermis and their penetration into the dermis after local, topical or systemic application, and dOFM is mentioned by the U.S.
Food and Drug Administration as a new method for the assessment of bioequivalence of topical drugs. [ 17 ] [ 18 ] [ 19 ] Head-to-head settings with dOFM have proven particularly useful for the evaluation of topical generic products, which need to demonstrate bioequivalence [ 9 ] to the reference listed drug product to obtain market approval. Applications of dOFM include ex vivo studies with tissue explants and preclinical and clinical in vivo studies. aOFM (Figure 4) allows continuous on-line monitoring of metabolic processes in the subcutaneous adipose tissue, e.g. of glucose and lactate , [ 21 ] [ 22 ] [ 23 ] as well as of larger analytes such as insulin (5.9 kDa). [ 24 ] [ 23 ] The role of polypeptides in metabolic signaling ( leptin , cytokine IL-6, TNFα) has also been studied with aOFM. [ 25 ] aOFM allows the quantification of proteins (e.g. albumin , size: 68 kDa) in adipose tissue [ 4 ] and thus opens up the possibility of investigating protein-bound drugs directly in peripheral target tissues, such as highly protein-bound insulin analogues designed for a prolonged, retarded insulin action. [ 26 ] Most recently, aOFM has been used to sample agonists to study obesity , lipid metabolism and immune-inflammation. Applications of aOFM include ex vivo studies with tissue explants and preclinical and clinical in vivo studies. [ citation needed ] cOFM (Figure 5) is used to conduct preclinical PK/PD studies in the animal brain. Access to the brain includes monitoring of blood-brain barrier function and drug transport across the intact blood-brain barrier. [ 27 ] cOFM allows a look behind the blood-brain barrier and the assessment of concentrations and effects of neuroactive substances directly in the targeted brain tissue. [ 28 ] The blood-brain barrier is a natural shield that protects the brain and limits the exchange of nutrients , metabolites and chemical messengers between blood and brain.
The blood-brain barrier also prevents potentially harmful substances from entering and damaging the brain. However, this highly effective barrier also prevents neuroactive substances from reaching appropriate targets. For researchers developing neuroactive drugs, it is therefore of major interest to know whether and to what extent an active pharmaceutical component can pass the blood-brain barrier. Experiments have shown that the blood-brain barrier is fully reestablished 15 days after implantation of the cOFM probe in the brain of rats. [ 29 ] The cOFM probe has been specially designed to avoid reopening the blood-brain barrier or causing additional trauma to the brain after implantation. cOFM enables continuous sampling of cerebral ISF with an intact blood-brain barrier and thus allows continuous PK monitoring in brain tissue. [ citation needed ] ISF compounds can be quantified either indirectly from diluted ISF samples by using OFM with additional calibration techniques, or directly from undiluted ISF samples, which can be collected with additional OFM methods. Quantification of compounds from diluted ISF samples requires the additional application of calibration methods, such as Zero Flow Rate, [ 30 ] No Net Flux [ 31 ] or Ionic Reference. [ 32 ] Zero Flow Rate has been used in combination with dOFM by Schaupp et al. [ 3 ] to quantify potassium , sodium and glucose in adipose ISF samples. No Net Flux has been applied to quantify several analytes in OFM studies in subcutaneous adipose, muscle and dermal ISF: the absolute lactate concentration [ 33 ] and the absolute glucose concentration in adipose ISF, [ 3 ] the absolute albumin concentration in muscle ISF [ 4 ] and the absolute insulin concentration in adipose and muscle ISF have been successfully determined. [ 34 ] Dragatin et al. [ 5 ] used No Net Flux in combination with dOFM to assess the absolute ISF concentration of a fully human therapeutic antibody.
Ionic Reference has been used in combination with OFM to assess the absolute glucose concentration [ 3 ] and the absolute lactate concentration in adipose ISF. [ 33 ] Dermal OFM has also been used to quantify the concentrations of human insulin and an insulin analogue in the ISF with inulin as an exogenous marker. [ citation needed ] Additional OFM methods, such as OFM recirculation and OFM suction, can collect undiluted ISF samples from which direct and absolute quantification of compounds is feasible. [ 35 ] OFM recirculation collects undiluted ISF samples by recirculating the perfusate in a closed loop until equilibrium concentrations between perfusate and ISF are established. Using albumin as the analyte, 20 recirculation cycles were sufficient to reach equilibrium ISF concentrations. OFM suction is performed by applying a mild vacuum, which pulls ISF from the tissue into the OFM probe. [ citation needed ]
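The No Net Flux calibration can be sketched numerically: the probe is perfused at several known inflow concentrations, the net gain or loss of analyte (outflow minus inflow) is regressed linearly against the inflow concentration, and the x-intercept of the regression line, where no net exchange occurs, estimates the interstitial concentration. The following Python sketch is purely illustrative; the function name, the 40 % exchange efficiency and the concentration values are invented for the example and are not taken from any OFM study.

```python
def no_net_flux_estimate(c_in, c_out):
    """Estimate the ISF concentration by the no-net-flux method.

    The probe is perfused at several known inflow concentrations c_in;
    the outflow concentrations c_out are measured.  The net flux
    (c_out - c_in) is regressed linearly against c_in, and the
    x-intercept, where no net exchange occurs, estimates the true
    interstitial concentration.
    """
    n = len(c_in)
    flux = [o - i for i, o in zip(c_in, c_out)]
    mean_x = sum(c_in) / n
    mean_y = sum(flux) / n
    # Ordinary least-squares slope and intercept of flux vs. c_in
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(c_in, flux)) \
            / sum((x - mean_x) ** 2 for x in c_in)
    intercept = mean_y - slope * mean_x
    return -intercept / slope  # x where flux == 0

# Synthetic example: assumed true ISF concentration 5.0 mmol/L and an
# assumed 40 % exchange efficiency -> c_out = c_in + 0.4 * (5.0 - c_in)
c_in = [0.0, 2.5, 5.0, 7.5, 10.0]
c_out = [c + 0.4 * (5.0 - c) for c in c_in]
print(round(no_net_flux_estimate(c_in, c_out), 2))  # → 5.0
```

With noise-free synthetic data the regression recovers the assumed concentration exactly; in practice the measured outflow concentrations scatter around the line and the x-intercept carries a confidence interval.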
https://en.wikipedia.org/wiki/Open_flow_microperfusion
In computing , open implementation platforms are systems in which the implementation itself is made accessible. Open implementation allows developers of a program to alter pieces of the underlying software to fit their specific needs. With this technique it is far easier to write general tools, though it makes the programs themselves more complex to design and use. There are also open language implementations , which make aspects of the language implementation accessible to application programmers. Open implementation is not to be confused with open source , which allows users to change the implementation's source code directly, rather than adjusting the implementation through dedicated application programming interfaces .
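The idea can be illustrated with a short Python sketch (the class and its meta-interface are invented for illustration and do not come from any particular system): a cache exposes its ordinary interface (put/get) to all clients, while an additional meta-interface lets a client tune the underlying implementation strategy, here the eviction policy, without editing the source.

```python
from collections import OrderedDict

class Cache:
    """A small cache with a conventional base interface (put/get) plus an
    'open implementation' meta-interface through which clients may swap
    the eviction strategy without touching the base interface."""

    def __init__(self, capacity=2, evict_policy="lru"):
        self.capacity = capacity
        self._data = OrderedDict()
        self.set_eviction_policy(evict_policy)  # meta-level knob

    # --- base interface --------------------------------------------
    def put(self, key, value):
        if key in self._data:
            del self._data[key]
        elif len(self._data) >= self.capacity:
            self._evict()
        self._data[key] = value

    def get(self, key):
        value = self._data[key]
        if self.policy == "lru":  # LRU refreshes recency on access
            self._data.move_to_end(key)
        return value

    # --- meta-interface: expose the implementation strategy --------
    def set_eviction_policy(self, policy):
        if policy not in ("lru", "fifo"):
            raise ValueError(policy)
        self.policy = policy

    def _evict(self):
        # Both policies drop the oldest entry; they differ in whether
        # get() refreshes an entry's position (LRU) or not (FIFO).
        self._data.popitem(last=False)
```

A client that knows its access pattern is scan-like can switch the same cache object to FIFO through the meta-interface, while tools written against the base put/get interface keep working unchanged; this separation of a stable base interface from an adjustable implementation is the point of the open implementation approach.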
https://en.wikipedia.org/wiki/Open_implementation
Open justice is a legal principle that requires that judicial proceedings be conducted in a transparent manner and with the oversight of the people, so as to safeguard the rights of those subject to the power of the court and to allow for the scrutiny of the public in general. The term has particular emphasis in legal systems based on British law , such as in the United Kingdom , Commonwealth countries such as South Africa , Canada and Australia , and former British colonies such as the United States . The term has several closely related meanings: it is seen as a fundamental right guaranteeing liberty; [ 1 ] [ 2 ] it describes guidelines for how courts can be more transparent; [ 3 ] and it sometimes identifies an ideal situation. [ 3 ] [ 4 ] In a courtroom, it means steps to promote transparency such as letting the public see and hear trials as they happen in real time, televising trials as they happen, videotaping proceedings for later viewing, publishing the content and documents of court files, providing transcripts of statements, making past decisions available for review in an easy-to-access format, [ 5 ] [ 6 ] publishing decisions, and giving reporters full access to files and participants so they can report what happens. The principle includes efforts to try to make what happens in the court understandable to the public and the press. [ 7 ] In Canada, open justice is referred to as the open court principle . The principle is viewed as an underlying or core principle in British law. [ 5 ] It has a long history dating back hundreds of years, and it has been traced to decisions made before the signing of Magna Carta in 1215. [ 1 ] [ 5 ] [ 8 ] Today the concept is so widely accepted that there is a general presumption that there should be judicial openness, such that openness is the rule, with secret or obscured proceedings being considered as exceptions needing to be justified.
[ 7 ] The rise of social media websites such as Facebook has opened new ways for court cases to be made public; for example, in Australia , courts have considered having websites with live videos as well as blogs by retired judges to "preserve the concepts of open justice" in the digital age. [ 9 ] In recent years, when governments try to cope with thorny problems such as terrorism , there are concerns that the principle of open justice can be undermined relatively easily by national security concerns. [ 10 ] There are concerns that if new secrecy guidelines harden into precedents, it might be hard to restore the "centuries old system of open justice". [ 10 ] Proponents of open justice assert numerous benefits. An overarching benefit is that it keeps courts behaving properly. [ 5 ] Openness acts as a safeguard for the proper administration of justice. [ 8 ] According to philosopher Jeremy Bentham , open justice is the "keenest spur to exertion and the surest of all guards against improbity." [ 8 ] Knowledge that court trials are regularly public encourages further attendance by the public. [ 6 ] Further, openness can mean more accurate decisions during a trial; for example, the proceedings can spur a witness to come forth, or encourage others to submit new evidence or dispute publicized statements. [ 5 ] Openness reduces the chance that the judgment is a mistake or that a case might have to be re-tried because of a subsequent sanction of contempt. [ 5 ] Proponents argue that open justice benefits democracy in a general sense because citizens can see how particular laws affect particular people, and therefore citizens are in a better position to advise lawmakers about such laws. [ 6 ] It helps ensure public confidence in legal decision-making, according to proponents. [ 2 ] Proponents of open justice have argued that public scrutiny permits those interested to "tap into the collective wisdom of what passes for fairness in similar cases".
[ 6 ] It facilitates a comparison of cases. [ 6 ] A British judge commented: This is the reason it is so important not to forget why proceedings are required to be subjected to the full glare of a public hearing. It is necessary because the public nature of proceedings deters inappropriate behaviour on the part of the court. It also maintains the public's confidence in the administration of justice. It enables the public to know that justice is being administered impartially. It can result in evidence becoming available which would not become available if the proceedings were conducted behind closed doors or with one or more of the parties' or witnesses' identity concealed. It makes uninformed and inaccurate comment about the proceedings less likely. If secrecy is restricted to those situations where justice would be frustrated if the cloak of anonymity is not provided, this reduces the risk of the sanction of contempt having to be invoked, with the expense and the interference with the administration of justice which this can involve. Still, practical considerations often mean that the ideal of open justice must be weighed against other values such as privacy and cost and national security. [ 2 ] There are some cases in which publicity in a courtroom proceeding can be detrimental. [ 5 ] In some cases, courts have opted to keep trials secret in proceedings against persons charged with terrorism , [ 11 ] to protect its intelligence gathering methods and contacts from exposure. In a case in Britain, in which a soldier was on trial for murdering an Afghan insurgent, there was an effort to keep the trial secret to protect him from possible future retribution, but there were calls for the identity of the soldier to be publicized based on the principle of open justice. 
[ 12 ] In situations when aspects of trials are kept secret, critics favoring open justice have argued that the secrecy is not needed for national security but is "nothing more than a useful drape to cover the inconvenient or the merely embarrassing." [ 8 ] Lawyers have often referred to the principle of open justice when disagreeing with a decision that was made, or calling for a retrial. In the United Kingdom , courts have tried to find the right balance between openness and secrecy, particularly in sensitive cases. [ 13 ] In the United States , there have been concerns that the principle of open justice has not been applied to cases of immigrants "wrongly ensnared in the post-9/11 law enforcement dragnet" who were denied access to lawyers and relatives and sometimes deported after secret removal proceedings. [ 14 ] There are other factors which sometimes must be balanced against the need for open justice. For example, there are situations in which the release of confidential information such as private financial records might harm the reputation of one of the parties. [ 5 ] In other situations, it may be necessary to protect the privacy of a minor . [ 5 ] A further case in which openness is seen as unnecessary is when legal matters involve uncontentious information unrelated to public issues, such as the financial division of an estate after a death. [ 5 ] Another factor sometimes working against the ideal of open justice is complexity; [ 10 ] according to one view, court proceedings over time "have evolved into a complex system that is hard for outsiders to understand." [ 7 ] In the aftermath of the Bridgegate scandal in New Jersey , an appellate judge ruled against releasing the identities of some persons involved in the scandal, on the grounds of being "sensitive to the privacy and reputation interests of uncharged third parties"; that is, releasing names to the media might unfairly tarnish reputations without a trial.
[ 15 ] Another judge commented on tradeoffs which sometimes work against openness: A hearing, or any part of it, may be in private if publicity would defeat the object of the hearing; it involves matters relating to national security; it involves confidential information (including information relating to personal financial matters) and publicity would damage that confidentiality; a private hearing is necessary to protect the interests of any child or protected party; it is a hearing of an application made without notice and it would be unjust to any respondent for there to be a public hearing; it involves uncontentious matters arising in the administration of trusts or in the administration of a deceased person’s estate; or the court considers this to be necessary, in the interests of justice. There is an Open Justice Initiative within the legal community to make justice more transparent. [ 16 ]
https://en.wikipedia.org/wiki/Open_justice
In January 2015, Stephen Hawking , Elon Musk , and dozens of artificial intelligence experts [ 1 ] signed an open letter on artificial intelligence [ 2 ] calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable . [ 1 ] The four-paragraph letter, titled " Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter ", lays out detailed research priorities in an accompanying twelve-page document. [ 3 ] By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute , an organisation working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community, [ 4 ] and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015. [ 5 ] The letter was made public on January 12. [ 6 ] The letter highlights both the positive and negative effects of artificial intelligence. [ 7 ] According to Bloomberg Business , Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a significant existential risk , and signatories such as Professor Oren Etzioni , who believe the AI field was being "impugned" by a one-sided media focus on the alleged risks.
[ 6 ] The letter contends that: The potential benefits (of AI) are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. [ 8 ] One of the signatories, Professor Bart Selman of Cornell University , said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist. [ 4 ] Another signatory, Professor Francesca Rossi , stated that "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues". [ 9 ] The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI ; our AI systems must "do what we want them to do". [ 1 ] The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science , such as computer security and formal verification . Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?"). [ 10 ] Some near-term concerns relate to autonomous vehicles, ranging from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined?
If not, how should culpability for any misuse or malfunction be apportioned? Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how to best manage the economic impact of jobs displaced by AI. [ 4 ] The document closes by echoing Microsoft research director Eric Horvitz 's concerns that: we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"? Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem". [ 10 ] Signatories include physicist Stephen Hawking , business magnate Elon Musk , the entrepreneurs behind DeepMind and Vicarious , Google 's director of research Peter Norvig , [ 1 ] Professor Stuart J. Russell of the University of California, Berkeley , [ 11 ] and other AI experts, robot makers, programmers, and ethicists. [ 12 ] The original signatory count was over 150 people, [ 13 ] including academics from Cambridge, Oxford, Stanford, Harvard, and MIT. [ 14 ]
https://en.wikipedia.org/wiki/Open_letter_on_artificial_intelligence_(2015)
The Licence Ouverte / Open Licence is a French open license published on October 18, 2011 by Etalab [ fr ] for open data from the State of France . The license was designed to be compatible with Creative Commons Licenses , Open Government License , and the Open Data Commons Attribution License. [ 1 ] Information released under the Open License may be re-used with attribution, such as a URL or other identification of the producer. The Open License is used by the city of Bordeaux , France to release data sets. [ 2 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_licence_(French)
In complex analysis , the open mapping theorem states that if U is a domain of the complex plane ℂ and f : U → ℂ is a non-constant holomorphic function , then f is an open map (i.e. it sends open subsets of U to open subsets of ℂ, and we have invariance of domain ). The open mapping theorem points to the sharp difference between holomorphy and real-differentiability. On the real line , for example, the differentiable function f(x) = x² is not an open map, as the image of the open interval (−1, 1) is the half-open interval [0, 1). The theorem for example implies that a non-constant holomorphic function cannot map an open disk onto a portion of any line embedded in the complex plane. Images of holomorphic functions can be of real dimension zero (if constant) or two (if non-constant), but never of dimension 1.

Assume f : U → ℂ is a non-constant holomorphic function and U is a domain of the complex plane. We have to show that every point in f(U) is an interior point of f(U), i.e. that every point in f(U) has a neighborhood (open disk) which is also in f(U). Consider an arbitrary w₀ in f(U). Then there exists a point z₀ in U such that w₀ = f(z₀). Since U is open, we can find d > 0 such that the closed disk B around z₀ with radius d is fully contained in U. Consider the function g(z) = f(z) − w₀. Note that z₀ is a root of the function. We know that g(z) is non-constant and holomorphic. The roots of g are isolated by the identity theorem , and by further decreasing the radius of the disk B, we can assure that g(z) has only a single root in B (although this single root may have multiplicity greater than 1). The boundary of B is a circle and hence a compact set , on which |g(z)| is a positive continuous function , so the extreme value theorem guarantees the existence of a positive minimum e; that is, e is the minimum of |g(z)| for z on the boundary of B, and e > 0. Denote by D the open disk around w₀ with radius e. By Rouché's theorem , the function g(z) = f(z) − w₀ will have the same number of roots (counted with multiplicity) in B as h(z) := f(z) − w₁ for any w₁ in D. This is because h(z) = g(z) + (w₀ − w₁), and for z on the boundary of B, |g(z)| ≥ e > |w₀ − w₁|. Thus, for every w₁ in D, there exists at least one z₁ in B such that f(z₁) = w₁. This means that the disk D is contained in f(B). The image of the ball B, f(B), is a subset of the image of U, f(U). Thus w₀ is an interior point of f(U). Since w₀ was arbitrary in f(U), we know that f(U) is open. Since U was arbitrary, the function f is open.
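The Rouché-based argument above can be checked numerically for a concrete function. The sketch below is an illustration only, not part of the proof; the choice f(z) = z², the point z₀ = 0.5, and the disk radius d = 0.25 are arbitrary. It estimates e = min |f(z) − w₀| on the boundary circle of B and then verifies that a value w₁ with |w₁ − w₀| < e is attained at some z₁ inside B.

```python
import cmath

# Arbitrary illustrative choices: f(z) = z^2, a disk B around z0 of radius d.
f = lambda z: z * z

z0 = 0.5            # point in the domain U
w0 = f(z0)          # w0 = f(z0) = 0.25
d = 0.25            # radius of the closed disk B around z0

# e = min |g(z)| = min |f(z) - w0| over the boundary circle of B;
# the extreme value theorem guarantees e > 0 for a single isolated root.
boundary = (z0 + d * cmath.exp(2j * cmath.pi * k / 10000) for k in range(10000))
e = min(abs(f(z) - w0) for z in boundary)

# Pick a w1 in the open disk D of radius e around w0 ...
w1 = w0 + 0.5 * e
# ... and exhibit a root z1 of f(z) - w1 inside B (for f(z) = z^2, a square root).
z1 = cmath.sqrt(w1)

assert e > 0                      # positive minimum on the compact boundary
assert abs(z1 - z0) < d           # z1 lies in B, so w1 = f(z1) is in f(B)
```

For this f, the minimum is attained at z = 0.25 with e = 0.1875, so every w₁ within 0.1875 of w₀ = 0.25 is hit inside B, as the theorem predicts.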
https://en.wikipedia.org/wiki/Open_mapping_theorem_(complex_analysis)
In functional analysis , the open mapping theorem , also known as the Banach–Schauder theorem or the Banach theorem [ 1 ] (named after Stefan Banach and Juliusz Schauder ), is a fundamental result which states that if a bounded or continuous linear operator between Banach spaces is surjective then it is an open map . A special case is also called the bounded inverse theorem (also called the inverse mapping theorem or Banach isomorphism theorem), which states that a bijective bounded linear operator T from one Banach space to another has a bounded inverse T⁻¹.

Open mapping theorem — [ 2 ] [ 3 ] Let T : E → F be a surjective continuous linear map between Banach spaces (or more generally Fréchet spaces ). Then T is an open mapping (that is, if U ⊂ E is an open subset, then T(U) is open).

The proof here uses the Baire category theorem , and completeness of both E and F is essential to the theorem. The statement of the theorem is no longer true if either space is assumed to be only a normed vector space ; see § Counterexample . The proof is based on the following lemmas, which are also somewhat of independent interest. A linear map f : E → F between topological vector spaces is said to be nearly open if, for each neighborhood U of zero, the closure cl f(U) contains a neighborhood of zero. The next lemma may be thought of as a weak version of the open mapping theorem.

Lemma — [ 4 ] [ 5 ] A linear map f : E → F between normed spaces is nearly open if the image of f is non-meager in F. (The continuity is not needed.)

Proof: Shrinking U, we can assume U is an open ball centered at zero.
We have f(E) = f(⋃_{n∈ℕ} nU) = ⋃_{n∈ℕ} f(nU). Thus, since the image is non-meager, some cl f(nU) contains an interior point y; that is, for some radius r > 0,

    B(y, r) ⊂ cl f(nU).

Then for any v in F with ‖v‖ < r, by linearity, convexity and (−1)U ⊂ U,

    2v = (y + v) − (y − v) ∈ cl f(nU) + cl f(nU) ⊂ cl f(2nU),

which proves the lemma by dividing by 2n. ∎ (The same proof works if E, F are pre-Fréchet spaces.)

The completeness of the domain then allows one to upgrade nearly open to open.

Lemma (Schauder) — [ 6 ] [ 7 ] Let f : E → F be a continuous linear map between normed spaces. If f is nearly open and if E is complete, then f is open and surjective. More precisely, if B(0, δ) ⊂ cl f(B(0, 1)) for some δ > 0 and if E is complete, then

    B(0, δ) ⊂ f(B(0, 1)),

where B(x, r) is an open ball with radius r and center x.

Proof: Let y be in B(0, δ) and cₙ > 0 some sequence. We have cl B(0, δ) ⊂ cl f(B(0, 1)). Thus, for each ϵ > 0 and z in F, we can find an x with ‖x‖ < δ⁻¹‖z‖ and z in B(f(x), ϵ).
Thus, taking z = y, we find an x₁ such that

    ‖x₁‖ < δ⁻¹‖y‖ and ‖y − f(x₁)‖ < c₁.

Applying the same argument with z = y − f(x₁), we then find an x₂ such that

    ‖x₂‖ < δ⁻¹c₁ and ‖y − f(x₁) − f(x₂)‖ < c₂,

where we observed ‖x₂‖ < δ⁻¹‖z‖ < δ⁻¹c₁. And so on. Thus, if c := Σ cₙ < ∞, we find a sequence (xₙ) such that x = Σ_{1}^{∞} xₙ converges and f(x) = y. Also,

    ‖x‖ ≤ Σ ‖xₙ‖ < δ⁻¹(‖y‖ + c).

Since δ⁻¹‖y‖ < 1, by making c small enough, we can achieve ‖x‖ < 1. ∎ (Again the same proof is valid if E, F are pre-Fréchet spaces.)

Proof of the theorem: By Baire's category theorem, the first lemma applies. Then the conclusion of the theorem follows from the second lemma. ∎

In general, a continuous bijection between topological spaces is not necessarily a homeomorphism. The open mapping theorem, when it applies, implies that bijectivity is enough:

Corollary (Bounded inverse theorem) — [ 8 ] A continuous bijective linear operator between Banach spaces (or Fréchet spaces) has a continuous inverse. That is, the inverse operator is continuous.

Even though the above bounded inverse theorem is a special case of the open mapping theorem, the open mapping theorem in turn follows from it. Indeed, a surjective continuous linear operator T : E → F factors as T = T₀ ∘ π, where π : E → E/ker T is the quotient map and T₀ : E/ker T → F is the induced map. Here, T₀ is continuous and bijective and thus is a homeomorphism by the bounded inverse theorem; in particular, it is an open mapping. As a quotient map for topological groups is open, T is then open.
Because the open mapping theorem and the bounded inverse theorem are essentially the same result, they are often simply called Banach's theorem . Here is a formulation of the open mapping theorem in terms of the transpose of an operator.

Theorem — [ 6 ] Let X and Y be Banach spaces, let B_X and B_Y denote their open unit balls, and let T : X → Y be a bounded linear operator. If δ > 0 then among the following four statements we have (1) ⟹ (2) ⟹ (3) ⟹ (4) (with the same δ). Furthermore, if T is surjective then (1) holds for some δ > 0.

Proof: The idea of 1. ⇒ 2. is to show that y ∉ cl T(B_X) implies ‖y‖ > δ, and that follows from the Hahn–Banach theorem . 2. ⇒ 3. is exactly the second lemma in § Statement and proof . Finally, 3. ⇒ 4. is trivial and 4. ⇒ 1. easily follows from the open mapping theorem. ∎

Alternatively, 1. implies that T′ is injective and has closed image, and then by the closed range theorem , that implies T has dense image and closed image, respectively; i.e., T is surjective. Hence, the above result is a variant of a special case of the closed range theorem.

Terence Tao gives the following quantitative formulation of the theorem: [ 9 ]

Theorem — Let T : E → F be a bounded operator between Banach spaces. Then the following are equivalent: The proof follows a cycle of implications 1 ⇒ 4 ⇒ 3 ⇒ 2 ⇒ 1.
Here 2 ⇒ 1 is the usual open mapping theorem. 1 ⇒ 4: For some r > 0, we have B(0, 2) ⊂ T(B(0, r)), where B means an open ball. Then f/‖f‖ = T(u/‖f‖) for some u/‖f‖ in B(0, r). That is, Tu = f with ‖u‖ < r‖f‖. 4 ⇒ 3: We can write f = Σ_{0}^{∞} f_j with f_j in the dense subspace and the sum converging in norm. Then, since E is complete, u = Σ_{0}^{∞} u_j with ‖u_j‖ ≤ C‖f_j‖ and Tu_j = f_j is a required solution. Finally, 3 ⇒ 2 is trivial. ∎

The open mapping theorem may not hold for normed spaces that are not complete. The quickest way to see this is to note that the closed graph theorem , a consequence of the open mapping theorem, fails without completeness. But here is a more concrete counterexample. Consider the space X of sequences x : ℕ → ℝ with only finitely many non-zero terms, equipped with the supremum norm . The map T : X → X defined by

    (Tx)_n = x_n / n

is bounded, linear and invertible, but T⁻¹ is unbounded. This does not contradict the bounded inverse theorem, since X is not complete , and thus is not a Banach space. To see that it is not complete, consider the sequence of sequences x^(n) ∈ X given by

    x^(n)_k = 2^(−k) for k ≤ n, and x^(n)_k = 0 for k > n.

It converges as n → ∞ to the sequence x^(∞) given by x^(∞)_k = 2^(−k) for all k, which has all its terms non-zero, and so does not lie in X.
The completion of X is the space c₀ of all sequences that converge to zero, which is a (closed) subspace of the space ℓ^∞(ℕ) of all bounded sequences. However, in this case, the map T is not onto, and thus not a bijection. To see this, one need simply note that the sequence given by x_n = 1/n is an element of c₀ but is not in the range of T : c₀ → c₀, since its preimage would have to be the constant sequence (1, 1, 1, …), which does not tend to zero. The same reasoning shows that T is also not onto in ℓ^∞; for example, x = (1, 1, 1, …) is not in the range of T. The open mapping theorem has several important consequences: The open mapping theorem does not imply that a continuous surjective linear operator admits a continuous linear section. What we have is: [ 9 ] In particular, the above applies to an operator between Hilbert spaces or an operator with finite-dimensional kernel (by the Hahn–Banach theorem ). If one drops the requirement that a section be linear, a surjective continuous linear operator between Banach spaces admits a continuous section; this is the Bartle–Graves theorem . [ 13 ] [ 14 ] Local convexity of X or Y is not essential to the proof, but completeness is: the theorem remains true in the case when X and Y are F-spaces . Furthermore, the theorem can be combined with the Baire category theorem in the following manner:

Open mapping theorem for continuous maps [ 12 ] [ 15 ] — Let A : X → Y be a continuous linear operator from a complete pseudometrizable TVS X onto a Hausdorff TVS Y. If Im A is nonmeager in Y, then A : X → Y is a (surjective) open map and Y is a complete pseudometrizable TVS.
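The behaviour in this counterexample can be illustrated numerically. The sketch below assumes the standard choice of operator (Tx)_n = x_n / n on finitely supported sequences (an assumption, since the operator's formula was lost in extraction): T is bounded with ‖Tx‖∞ ≤ ‖x‖∞, while T⁻¹ sends the n-th unit vector, of norm 1, to a vector of norm n, so no bound ‖T⁻¹y‖ ≤ C‖y‖ can hold.

```python
# Counterexample sketch, assuming T(x)_n = x_n / n (1-based indices).
# X = finitely supported real sequences with the supremum norm, modeled as lists.

def T(x):
    return [xi / (n + 1) for n, xi in enumerate(x)]

def T_inv(y):
    return [yi * (n + 1) for n, yi in enumerate(y)]

def sup_norm(x):
    return max(abs(v) for v in x)

# T is bounded (operator norm 1): |x_n / n| <= |x_n| for every n.
x = [3.0, -1.0, 4.0, -1.5]
assert sup_norm(T(x)) <= sup_norm(x)
assert T_inv(T(x)) == x            # T is invertible on X

# T_inv is unbounded: on the n-th unit vector e_n (norm 1) its norm is n.
for n in [1, 10, 1000]:
    e_n = [0.0] * (n - 1) + [1.0]
    assert sup_norm(T_inv(e_n)) == n
```

Since the norms sup_norm(T_inv(e_n)) grow without bound while ‖e_n‖∞ = 1, the inverse is unbounded, exactly as the text states; the bounded inverse theorem is not contradicted because X is not complete.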
Moreover, if X is assumed to be Hausdorff (i.e. an F-space ), then Y is also an F-space. (The proof is essentially the same as in the Banach or Fréchet cases; the proof is modified slightly to avoid the use of convexity.) Furthermore, in this latter case, if N is the kernel of A, then there is a canonical factorization of A in the form X → X/N →α Y, where X/N is the quotient space (also an F-space) of X by the closed subspace N. The quotient mapping X → X/N is open, and the mapping α is an isomorphism of topological vector spaces . [ 16 ]

An important special case of this theorem can also be stated as:

Theorem [ 17 ] — Let X and Y be two F-spaces . Then every continuous linear map of X onto Y is a TVS homomorphism , where a linear map u : X → Y is a topological vector space (TVS) homomorphism if the induced map û : X/ker(u) → Y is a TVS-isomorphism onto its image.

On the other hand, a more general formulation, which implies the first, can be given:

Open mapping theorem [ 15 ] — Let A : X → Y be a surjective linear map from a complete pseudometrizable TVS X onto a TVS Y and suppose that at least one of the following two conditions is satisfied: If A is a closed linear operator, then A is an open mapping. If A is a continuous linear operator and Y is Hausdorff, then A is (a closed linear operator and thus also) an open mapping.
Nearly/almost open linear maps: A linear map A : X → Y between two topological vector spaces (TVSs) is called a nearly open map (or sometimes, an almost open map ) if for every neighborhood U of the origin in the domain, the closure of its image, cl A(U), is a neighborhood of the origin in Y. [ 18 ] Many authors use a different definition of "nearly/almost open map" that requires that the closure of A(U) be a neighborhood of the origin in A(X) rather than in Y, [ 18 ] but for surjective maps these definitions are equivalent. A bijective linear map is nearly open if and only if its inverse is continuous. [ 18 ] Every surjective linear map from a locally convex TVS onto a barrelled TVS is nearly open. [ 19 ] The same is true of every surjective linear map from a TVS onto a Baire TVS. [ 19 ]

Open mapping theorem [ 20 ] — If a closed surjective linear map from a complete pseudometrizable TVS onto a Hausdorff TVS is nearly open, then it is open.

Theorem [ 21 ] — If A : X → Y is a continuous linear bijection from a complete pseudometrizable topological vector space (TVS) onto a Hausdorff TVS that is a Baire space , then A : X → Y is a homeomorphism (and thus an isomorphism of TVSs). Webbed spaces are a class of topological vector spaces for which the open mapping theorem and the closed graph theorem hold. This article incorporates material from Proof of open mapping theorem on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
https://en.wikipedia.org/wiki/Open_mapping_theorem_(functional_analysis)
The open metering system of the Open Metering System Group e.V. stands for a manufacturer- and media-independent standardization for Meter-Bus (M-Bus) based communication between utility meters (electricity, gas, water, thermal energy), submetering (cold/hot water, thermal energy, heat cost allocators ), and systems in the field of smart meters . In response to Directive 2006/32/EC on energy end-use efficiency and energy services of the European Union (in particular Article 13 of the Directive), several German multi-utility companies (public utilities offering more than one type of supply, such as electricity, gas, water and district heating) joined forces and asked international manufacturers of meters intended for billing to create a common standard. The goal was to have meters with standardized communication interfaces and systems in the future. On the manufacturer side, members of the technical associations FIGAWA (German Association for Gas and Water), KNX and ZVEI (German Electrical and Electronics Industry Association) came together and, on the basis of the European Meter-Bus standard (EN 13757 Part 1 to Part 7) and the Dutch NTA 8130, drew up joint specifications that guarantee manufacturer-independent interoperability. Several working groups – first in the Open Metering System initiative, since 2015 within the Open Metering System Group e. V. – have checked the application of existing standards for interoperable communication of measurement systems since May 2007 and developed additions and specifications. For the data transmission defined as primary communication between the actual meters and a gateway (e.g. a Smart Meter Gateway), the EN 13757-x series of standards has been identified as the currently applicable communication standard. This series of standards describes the M-Bus both as a physical interface, wired and wireless, and as a data protocol.
Both the OMS specification and the KNX standard use the EN 13757-4 standard for wireless communication. This means that both measurement data and data from the field of building automation can be transferred via the same system. Wide-area communication is not the focus of the Open Metering System specification. This is solved with proven Internet standards, whereby the transmission should be independent of the physical medium as long as the necessary security mechanisms are observed. For data visualization (consumer display), for the connection of the building automation at the end customer, and for future services (e.g. tariff or load management), devices are used that work according to the popular KNX standard (ISO/IEC 14543-3 = EN 50090). In the specification work, European concerns were also considered. On the basis of Mandate M/441 of the European Commission, smart metering should function with an open architecture including communication protocols that enable interoperability. For this purpose, the OMS-Group cooperated with KEMA (now DNV ) on harmonization with the Dutch regulations NTA 8130/DSMR. The requirements for data security and access protection were regarded as a decisive prerequisite for the acceptance of intelligent metering systems. Device-specific encryption of the consumption data on the basis of common algorithms ( AES 128 ) is part of the OMS specification. Compliance with the Open Metering System specification can be checked using the OMS test specification and the OMS conformance test tool. The actual proof of OMS conformity is provided by having the device checked by an independent testing institute and a certificate issued by an independent certification body. The results have been fed into European standardization via the Technical Committee CEN/TC 294 since 2009, which maintains and develops the EN 13757 series of standards.
This means that essential components of the OMS specification have been incorporated into updated European Standards. The standards and the published draft standards are available to everyone for purchase. The OMS specification documents are in English and available for free. A list of local grid operator policies for HAN (Home Area Network) remote reading gives an overview of how energy savings and efficiency goals can be implemented in practice under local legislation and network operator policies. Energy consumption and savings are also an important part of today's home automation integrations, which can visualize statistics in user interfaces. On some networks the P1 port is closed by default; the operator activates it upon request. EU Directive 2019/944 says that the consumer is entitled to connect their own devices to the smart meter and receive metering information in real time. [ 5 ]
https://en.wikipedia.org/wiki/Open_metering_system
Open nomenclature is a vocabulary of partly informal terms and signs in which a taxonomist may express remarks about their own material. This is in contrast to synonymy lists , in which a taxonomist may express remarks on the work of others. [ 1 ] Commonly such remarks take the form of abbreviated taxonomic expressions in biological classification. [ 2 ] : 223 There are no strict conventions in open nomenclature concerning which expressions to use or where to place them in the Latin name of a species or other taxon , and this may lead to difficulties of interpretation. However, the most significant unsettled issues concern the way that their meanings are to be interpreted. The International Code of Zoological Nomenclature (ICZN) makes no reference to open nomenclature, leaving its use and meaning open for interpretation by taxonomists. [ 2 ] : 223 The following are examples of commonly used shorthand in open nomenclature:
https://en.wikipedia.org/wiki/Open_nomenclature
In computing, an open platform describes a software system based on open standards , such as published and fully documented external application programming interfaces (APIs), that allow the software to be used in ways other than the original programmer intended, without requiring modification of the source code. Using these interfaces, a third party could integrate with the platform to add functionality. [ 1 ] The opposite is a closed platform . An open platform does not mean it is open source ; however, most open platforms have multiple implementations of their APIs. For example, the Common Gateway Interface (CGI) is implemented by open-source web servers as well as by Microsoft Internet Information Server (IIS). An open platform can consist of software components or modules that are either proprietary or open source, or both. It can also exist as part of a closed platform: CGI is an open platform, while many servers that implement CGI also have other proprietary parts that are not part of the open platform. An open platform implies that the vendor allows, and perhaps supports, such third-party integration. Using an open platform, a developer could add features or functionality that the platform vendor had not completed or had not conceived of. An open platform also allows the developer to change existing functionality, as the specifications are publicly available open standards. A service-oriented architecture allows applications, running as services, to be accessed in a distributed computing environment, such as between multiple systems or across the Internet. A major focus of Web services is to make functional building blocks accessible over standard Internet protocols that are independent of platforms and programming languages. An open SOA platform would allow anyone to access and interact with these building blocks.
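The CGI example above can be made concrete. A CGI program receives the request through documented environment variables such as QUERY_STRING and writes an HTTP header block, a blank line, and a body to standard output; because that interface is an open standard, the same program runs unchanged under any CGI-capable server, open source or proprietary. A minimal sketch (the `name` parameter and greeting are invented for illustration):

```python
#!/usr/bin/env python3
# Minimal CGI sketch: the gateway interface consists of environment
# variables (input) and standard output (response).
import os
from urllib.parse import parse_qs

def handle_request(environ):
    # Query parameters arrive via the QUERY_STRING environment variable.
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    body = f"Hello, {name}!\n"
    # A CGI response is a header block, then a blank line, then the body.
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    print(handle_request(os.environ), end="")
```

Any server implementing the standard invokes the script the same way, which is exactly the interoperability property the text attributes to an open platform.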
A 2008 Harvard Business School working paper, titled "Opening Platforms: How, When and Why?", differentiated a platform's openness in four aspects and gave example platforms. [ 2 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Open_platform
In physics , an open quantum system is a quantum -mechanical system that interacts with an external quantum system , which is known as the environment or a bath . In general, these interactions significantly change the dynamics of the system and result in quantum dissipation , such that the information contained in the system is lost to its environment. Because no quantum system is completely isolated from its surroundings, [ 1 ] it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems. Techniques developed in the context of open quantum systems have proven powerful in fields such as quantum optics , quantum measurement theory , quantum statistical mechanics , quantum information science, quantum thermodynamics , quantum cosmology , quantum biology , and semi-classical approximations. A complete description of a quantum system requires the inclusion of the environment. Completely describing the resulting combined system then requires the inclusion of its environment, which results in a new system that can only be completely described if its environment is included, and so on. The eventual outcome of this process of embedding is the state of the whole universe described by a wavefunction Ψ. The fact that every quantum system has some degree of openness also means that no quantum system can ever be in a pure state . Even if the combined system is in a pure state and can be described by a wavefunction Ψ, a subsystem in general cannot be described by a wavefunction. This observation motivated the formalism of density matrices , or density operators, introduced by John von Neumann [ 2 ] in 1927 and independently, but less systematically, by Lev Landau in 1927 and Felix Bloch in 1946.
In general, the state of a subsystem is described by the density operator ρ and the expectation value of an observable A by the scalar product (ρ ⋅ A) = tr{ρA}. There is no way to know whether the combined system is pure from knowledge of the observables of the subsystem alone. In particular, if the combined system has quantum entanglement, the state of the subsystem is not pure. In general, the time evolution of closed quantum systems is described by unitary operators acting on the system. For open systems, however, the interactions between the system and its environment mean that the dynamics of the system cannot be accurately described using unitary operators alone. The time evolution of quantum systems can be determined by solving the effective equations of motion, also known as master equations, that govern how the density matrix describing the system changes over time, together with the dynamics of the observables associated with the system. In general, however, the environment that we want to model as part of our system is very large and complicated, which makes finding exact solutions to the master equations difficult, if not impossible. As such, the theory of open quantum systems seeks an economical treatment of the dynamics of the system and its observables. Typical observables of interest include things like energy and the robustness of quantum coherence (i.e. a measure of a state's coherence). Loss of energy to the environment is termed quantum dissipation, while loss of coherence is termed quantum decoherence. Due to the difficulty of determining the solutions to the master equations for a particular system and environment, a variety of techniques and approaches have been developed. A common objective is to derive a reduced description wherein the system's dynamics are considered explicitly and the bath's dynamics are described implicitly.
The main assumption is that the entire system-environment combination is a large closed system. Therefore, its time evolution is governed by a unitary transformation generated by a global Hamiltonian. For the combined system-bath scenario the global Hamiltonian can be decomposed as H = H_S + H_B + H_SB, where H_S is the system's Hamiltonian, H_B is the bath Hamiltonian and H_SB is the system-bath interaction. The state of the system can then be obtained from a partial trace over the combined system and bath: ρ_S(t) = tr_B{ρ_SB(t)}. [ 3 ] Another common assumption used to make systems easier to solve is that the state of the system at the next moment depends only on the current state of the system; in other words, the system has no memory of its previous states. Systems with this property are known as Markovian systems. This approximation is justified when the system in question has enough time to relax to equilibrium before being perturbed again by interactions with its environment. For systems that have very fast or very frequent perturbations from their coupling to their environment, this approximation becomes much less accurate. When the interaction between the system and the environment is weak, time-dependent perturbation theory is appropriate for treating the evolution of the system. In other words, if the interaction between the system and its environment is weak, then any changes to the combined system over time can be approximated as originating from only the system in question. Another typical assumption is that the system and bath are initially uncorrelated: ρ(0) = ρ_S ⊗ ρ_B.
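The claim that a subsystem of an entangled pure state cannot itself be pure can be checked numerically. A minimal sketch in Python with NumPy (the two-qubit Bell state stands in for a system-bath pair): tracing out the "bath" qubit leaves the system qubit in the maximally mixed state.

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>)/sqrt(2): a pure state of the combined "system + bath".
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_SB = np.outer(psi, psi.conj())      # combined 4x4 density matrix

# Partial trace over the bath qubit: rho_S = tr_B{rho_SB}.
rho = rho_SB.reshape(2, 2, 2, 2)        # indices (s, b, s', b')
rho_S = np.einsum('ibjb->ij', rho)      # contract the bath index

print(rho_S)                            # 0.5 * identity: maximally mixed
print(np.trace(rho_S @ rho_S).real)     # purity tr(rho^2) = 0.5 < 1
```

The purity tr(ρ²) dropping below 1 is exactly the signature that the reduced state carries entanglement with the traced-out bath.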
This idea originated with Felix Bloch and was expanded upon by Alfred Redfield in his derivation of the Redfield equation. The Redfield equation is a Markovian master equation that describes the time evolution of the reduced density matrix of the system. The drawback of the Redfield equation is that it does not conserve the positivity of the density operator. A formal construction of a local equation of motion with a Markovian property is an alternative to a reduced derivation. The theory is based on an axiomatic approach. The basic starting point is a completely positive map. The assumption is that the initial system-environment state is uncorrelated, ρ(0) = ρ_S ⊗ ρ_B, and that the combined dynamics is generated by a unitary operator. Such a map can be written in the Kraus operator-sum form. The most general type of time-homogeneous master equation with the Markovian property describing non-unitary evolution of the density matrix ρ that is trace-preserving and completely positive for any initial condition is the Gorini–Kossakowski–Sudarshan–Lindblad equation or GKSL equation, dρ_S/dt = −(i/ħ)[H_S, ρ_S] + L_D(ρ_S), where H_S is a (Hermitian) Hamiltonian part and L_D(ρ_S) = Σ_n (V_n ρ_S V_n† − ½(V_n† V_n ρ_S + ρ_S V_n† V_n)) is the dissipative part, describing implicitly, through the system operators V_n, the influence of the bath on the system. The Markov property imposes that the system and bath are uncorrelated at all times, ρ_SB = ρ_S ⊗ ρ_B. The GKSL equation is unidirectional and leads any initial state ρ_S to a steady-state solution which is an invariant of the equation of motion, ρ̇_S(t → ∞) = 0. The family of maps generated by the GKSL equation forms a quantum dynamical semigroup.
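As an illustration of GKSL dynamics, here is a minimal numerical sketch, assuming a two-level system with a single jump operator V = √γ σ₋ (spontaneous decay) and a vanishing Hamiltonian part; a crude forward-Euler integrator stands in for the proper solvers one would use in practice.

```python
import numpy as np

# GKSL equation for a decaying two-level atom: H_S = 0 (rotating frame),
# single jump operator V = sqrt(gamma) * sigma_minus.
gamma = 1.0
V = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_minus

def lindblad_rhs(rho):
    """dρ/dt = V ρ V† − ½(V†V ρ + ρ V†V), the dissipative part L_D(ρ)."""
    VdV = V.conj().T @ V
    return V @ rho @ V.conj().T - 0.5 * (VdV @ rho + rho @ VdV)

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state |1><1|
dt, steps = 1e-3, 2000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)           # forward-Euler step

# Excited-state population should follow exp(-gamma * t); here t = 2.0.
print(rho[1, 1].real, np.exp(-gamma * dt * steps))
```

Note that the trace is preserved at every step (tr(VρV†) = tr(V†Vρ)), and the state relaxes toward the ground-state projector, the steady-state invariant mentioned above.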
In some fields, such as quantum optics, the term Lindblad superoperator is often used to express the quantum master equation for a dissipative system. E. B. Davies derived the GKSL Markovian master equations using perturbation theory and additional approximations, such as the rotating-wave (secular) approximation, thus fixing the flaws of the Redfield equation. Davies's construction is consistent with the Kubo–Martin–Schwinger stability criterion for thermal equilibrium, i.e. the KMS state. [ 4 ] An alternative approach to fixing the Redfield equation has been proposed by J. Thingna, J.-S. Wang, and P. Hänggi, [ 5 ] which allows the system-bath interaction to play a role in an equilibrium state differing from the KMS state. In 1981, Amir Caldeira and Anthony J. Leggett proposed a simplifying assumption in which the bath is decomposed into normal modes represented as harmonic oscillators linearly coupled to the system. [ 6 ] As a result, the influence of the bath can be summarized by the bath spectral function. This method is known as the Caldeira–Leggett model, or harmonic bath model. To proceed and obtain explicit solutions, the path integral formulation of quantum mechanics is typically employed. A large part of the power behind this method is the fact that harmonic oscillators are relatively well understood compared to the true coupling that exists between the system and the bath. Unfortunately, while the Caldeira–Leggett model leads to a physically consistent picture of quantum dissipation, its ergodic properties are too weak, and so the dynamics of the model do not generate wide-scale quantum entanglement between the bath modes. An alternative bath model is a spin bath. [ 7 ] At low temperatures and weak system-bath coupling, the Caldeira–Leggett and spin bath models are equivalent, but for higher temperatures or strong system-bath coupling, the spin bath model has strong ergodic properties.
Once the system is coupled, significant entanglement is generated between all modes. In other words, the spin bath model can simulate the Caldeira–Leggett model, but the opposite is not true. An example of a natural system coupled to a spin bath is a nitrogen-vacancy (N-V) center in diamond. In this example, the color center is the system and the bath consists of carbon-13 ( 13 C) impurities which interact with the system via the magnetic dipole-dipole interaction. For open quantum systems where the bath has oscillations that are particularly fast, it is possible to average them out by looking at sufficiently large changes in time. This is possible because the average amplitude of fast oscillations over a large time scale is equal to the central value, which can always be chosen to be zero with a minor shift along the vertical axis. This method of simplifying problems is known as the secular approximation. Open quantum systems that do not have the Markovian property are generally much more difficult to solve. This is largely due to the fact that the next state of a non-Markovian system is determined by each of its previous states, which rapidly increases the memory requirements for computing the evolution of the system. Currently, the methods of treating these systems employ what are known as projection operator techniques. These techniques employ a projection operator P, which effectively applies the trace over the environment as described previously. The result of applying P to ρ (i.e. calculating Pρ) is called the relevant part of ρ. For completeness, another operator Q is defined so that P + Q = I, where I is the identity.
The result of applying Q to ρ (i.e. calculating Qρ) is called the irrelevant part of ρ. The primary goal of these methods is then to derive a master equation that defines the evolution of Pρ. One such derivation using the projection operator technique results in what is known as the Nakajima–Zwanzig equation. This derivation highlights the problem of the reduced dynamics being non-local in time: the effect of the bath throughout the time evolution of the system is hidden in the memory kernel κ(τ). While the Nakajima–Zwanzig equation is an exact equation that holds for almost all open quantum systems and environments, it can be very difficult to solve. This means that approximations generally need to be introduced to reduce the complexity of the problem into something more manageable. As an example, the assumption of a fast bath is required to lead to a time-local equation, ∂_t ρ_S = L ρ_S. Other examples of valid approximations include the weak-coupling approximation and the single-coupling approximation. In some cases, the projection operator technique can be used to reduce the dependence of the system's next state on all of its previous states. This method of approaching open quantum systems is known as the time-convolutionless projection operator technique, and it is used to generate master equations that are inherently local in time. Because these equations can neglect more of the history of the system, they are often easier to solve than things like the Nakajima–Zwanzig equation. Another approach emerges as an analogue of classical dissipation theory developed by Ryogo Kubo and Y. Tanimura.
This approach is connected to hierarchical equations of motion, which embed the density operator in a larger space of auxiliary operators such that a time-local equation is obtained for the whole set, with their memory contained in the auxiliary operators.
https://en.wikipedia.org/wiki/Open_quantum_system
In molecular biology , reading frames are defined as spans of DNA sequence between the start and stop codons . Usually, this is considered within a studied region of a prokaryotic DNA sequence, where only one of the six possible reading frames will be "open" (the "reading", however, refers to the RNA produced by transcription of the DNA and its subsequent interaction with the ribosome in translation ). Such an open reading frame (ORF) may [ 1 ] contain a start codon (usually AUG in terms of RNA ) and by definition cannot extend beyond a stop codon (usually UAA, UAG or UGA in RNA). [ 2 ] That start codon (not necessarily the first) indicates where translation may start. The transcription termination site is located after the ORF, beyond the translation stop codon. If transcription were to cease before the stop codon, an incomplete protein would be made during translation. [ 3 ] In eukaryotic genes with multiple exons , introns are removed and exons are then joined together after transcription to yield the final mRNA for protein translation. In the context of gene finding , the start-stop definition of an ORF therefore only applies to spliced mRNAs , not genomic DNA, since introns may contain stop codons and/or cause shifts between reading frames. An alternative definition says that an ORF is a sequence that has a length divisible by three and is bounded by stop codons. [ 1 ] [ 4 ] This more general definition can be useful in the context of transcriptomics and metagenomics , where a start or stop codon may not be present in the obtained sequences. Such an ORF corresponds to parts of a gene rather than the complete gene. One common use of open reading frames (ORFs) is as one piece of evidence to assist in gene prediction . Long ORFs are often used, along with other evidence, to initially identify candidate protein-coding regions or functional RNA -coding regions in a DNA sequence. 
[ 5 ] The presence of an ORF does not necessarily mean that the region is always translated . For example, in a randomly generated DNA sequence with an equal percentage of each nucleotide , a stop codon would be expected once every 21 codons . [ 5 ] A simple gene prediction algorithm for prokaryotes might look for a start codon followed by an open reading frame that is long enough to encode a typical protein, where the codon usage of that region matches the frequency characteristic for the given organism's coding regions. [ 5 ] Therefore, some authors say that an ORF should have a minimal length, e.g. 100 codons [ 6 ] or 150 codons. [ 5 ] By itself, even a long open reading frame is not conclusive evidence for the presence of a gene . [ 5 ] Some short open reading frames , [ 7 ] also named small open reading frames , [ 8 ] abbreviated as sORFs or smORFs , usually < 100 codons in length, [ 9 ] that lack the classical hallmarks of protein-coding genes (both from ncRNAs and mRNAs) can produce functional peptides . [ 10 ] They encode microproteins or sORF-encoded proteins (SEPs). The 5′-UTRs of about 50% of mammalian mRNAs are known to contain one or several sORFs, [ 11 ] also called upstream ORFs or uORFs. However, less than 10% of the vertebrate mRNAs surveyed in an older study contained AUG codons in front of the major ORF. Interestingly, uORFs were found in two thirds of proto-oncogenes and related proteins. [ 12 ] 64–75% of experimentally found translation initiation sites of sORFs are conserved in the genomes of human and mouse, which may indicate that these elements have function. [ 13 ] However, sORFs can often be found only in the minor forms of mRNAs and so escape selection; the high conservation of initiation sites may be connected with their location inside promoters of the relevant genes. This is characteristic of the SLAMF1 gene, for example. [ 14 ] Since DNA is interpreted in groups of three nucleotides (codons), a DNA strand has three distinct reading frames.
[ 15 ] The double helix of a DNA molecule has two anti-parallel strands; with the two strands having three reading frames each, there are six possible frame translations. [ 15 ] The ORF Finder (Open Reading Frame Finder) [ 16 ] is a graphical analysis tool which finds all open reading frames of a selectable minimum size in a user's sequence or in a sequence already in the database. This tool identifies all open reading frames using the standard or alternative genetic codes. The deduced amino acid sequence can be saved in various formats and searched against the sequence database using the basic local alignment search tool (BLAST) server. The ORF Finder should be helpful in preparing complete and accurate sequence submissions. It is also packaged with the Sequin sequence submission software (sequence analyser). ORF Investigator [ 17 ] is a program which not only gives information about the coding and non-coding sequences but can also perform pairwise global alignment of different gene/DNA region sequences. The tool efficiently finds the ORFs for corresponding amino acid sequences, converts them into their single-letter amino acid code, and provides their locations in the sequence. The pairwise global alignment between the sequences makes it convenient to detect different mutations, including single nucleotide polymorphisms ; the Needleman–Wunsch algorithm is used for the alignment. The ORF Investigator is written in the portable Perl programming language , and is therefore available to users of all common operating systems. OrfPredictor [ 18 ] is a web server designed for identifying protein-coding regions in expressed sequence tag (EST)-derived sequences. For query sequences with a hit in BLASTX, the program predicts the coding regions based on the translation reading frames identified in BLASTX alignments; otherwise, it predicts the most probable coding region based on the intrinsic signals of the query sequences.
The output is the predicted peptide sequences in the FASTA format , and a definition line that includes the query ID, the translation reading frame and the nucleotide positions where the coding region begins and ends. OrfPredictor facilitates the annotation of EST-derived sequences, particularly for large-scale EST projects. OrfPredictor uses a combination of the two different ORF definitions mentioned above. It searches for stretches starting with a start codon and ending at a stop codon. As an additional criterion, it searches for a stop codon in the 5′ untranslated region (UTR or NTR, nontranslated region [ 19 ] ). The OrfPredictor web server is no longer supported, but the standalone OrfPredictor tool can be downloaded at the following site ( http://bioinformatics.ysu.edu/publication/tools_download/ ). ORFik is an R package in Bioconductor for finding open reading frames and using next-generation sequencing technologies to justify ORFs. [ 20 ] [ 21 ] orfipy is a tool written in Python / Cython to extract ORFs in an extremely fast and flexible manner. [ 22 ] orfipy can work with plain or gzipped FASTA and FASTQ sequences, and provides several options to fine-tune ORF searches; these include specifying the start and stop codons, reporting partial ORFs, and using custom translation tables. The results can be saved in multiple formats, including the space-efficient BED format. orfipy is particularly fast for data containing multiple smaller FASTA sequences, such as de-novo transcriptome assemblies. [ 23 ]
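The core scan such tools perform can be sketched in a few lines of Python. This toy version (the function names and the ATG-only start rule are simplifying assumptions, not any particular tool's behavior) reports start-to-stop ORFs of a minimum length in all six reading frames.

```python
# Toy six-frame ORF scanner. Real tools (ORF Finder, orfipy) additionally
# handle alternative genetic codes, partial ORFs and large files.
STOPS = {"TAA", "TAG", "TGA"}

def revcomp(seq):
    """Reverse complement: complement each base, then reverse the strand."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def orfs(seq, min_codons=2):
    """Yield (strand, frame, orf) for ATG...stop spans in all six frames."""
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for frame in range(3):
            codons = [s[i:i + 3] for i in range(frame, len(s) - 2, 3)]
            i = 0
            while i < len(codons):
                if codons[i] == "ATG":
                    for j in range(i + 1, len(codons)):
                        if codons[j] in STOPS:
                            if j - i >= min_codons:
                                yield strand, frame, "".join(codons[i:j + 1])
                            i = j  # resume scanning after this ORF
                            break
                i += 1

for hit in orfs("ATGAAATGATTTCATTTT"):
    print(hit)
```

On this toy input only the forward frame 0 contains a complete ATG-to-stop span, which matches the definition of an ORF bounded by a stop codon given earlier.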
https://en.wikipedia.org/wiki/Open_reading_frame
In atomic physics and quantum chemistry , the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals . [ 1 ] For example, the electron configuration of the neon atom is 1s 2 2s 2 2p 6 , meaning that the 1s, 2s, and 2p subshells are occupied by two, two, and six electrons, respectively. Electronic configurations describe each electron as moving independently in an orbital , in an average field created by the nuclei and all the other electrons. Mathematically, configurations are described by Slater determinants or configuration state functions . According to the laws of quantum mechanics , a level of energy is associated with each electron configuration. In certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon . Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements , for describing the chemical bonds that hold atoms together, and in understanding the chemical formulas of compounds and the geometries of molecules . In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors . Electron configuration was first conceived under the Bohr model of the atom , and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons . An electron shell is the set of allowed states that share the same principal quantum number , n , that electrons may occupy. In each term of an electron configuration, n is the positive integer that precedes each orbital letter ( helium 's electron configuration is 1s 2 , therefore n = 1, and the orbital contains two electrons). An atom's n th electron shell can accommodate 2 n 2 electrons. 
For example, the first shell can accommodate two electrons, the second shell eight electrons, the third shell eighteen, and so on. The factor of two arises because of electron spin : each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin of + 1 ⁄ 2 (usually denoted by an up-arrow) and one with a spin of − 1 ⁄ 2 (with a down-arrow). A subshell is the set of states defined by a common azimuthal quantum number , l , within a shell. The value of l is in the range from 0 to n − 1. The values l = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. For example, the 3d subshell has n = 3 and l = 2. The maximum number of electrons that can be placed in a subshell is given by 2(2 l + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell. The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, [ a ] in particular the Pauli exclusion principle , which states that no two electrons in the same atom can have the same values of the four quantum numbers . [ 2 ] Exhaustive technical details about the complete quantum mechanical theory of atomic spectra and structure can be found in the basic book of Robert D. Cowan. [ 3 ]
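The two capacity formulas, 2(2l + 1) per subshell and 2n² per shell, can be checked against each other with a short sketch: the shell total is the sum of its subshell capacities over l = 0 … n − 1.

```python
# Subshell capacity 2(2l+1); shell capacity 2n^2 as the sum over its subshells.
LABELS = "spdfghi"   # l = 0, 1, 2, ... mapped to the spectroscopic letters

def subshell_capacity(l):
    return 2 * (2 * l + 1)

for n in range(1, 5):
    parts = {f"{n}{LABELS[l]}": subshell_capacity(l) for l in range(n)}
    total = sum(parts.values())
    assert total == 2 * n * n   # sum of 2(2l+1) for l < n equals 2n^2
    print(n, parts, total)      # e.g. n=3 -> {'3s': 2, '3p': 6, '3d': 10} 18
```

This reproduces the 2, 8, 18, 32 sequence of shell capacities quoted above.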
Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s 2 2s 1 (pronounced "one-s-two, two-s-one"). Phosphorus ( atomic number 15) is as follows: 1s 2 2s 2 2p 6 3s 2 3p 3 . For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used. The electron configuration can be visualized as the core electrons , equivalent to the noble gas of the preceding period , and the valence electrons : each element in a period differs only by the last few subshells. Phosphorus, for instance, is in the third period. It differs from the second-period neon , whose configuration is 1s 2 2s 2 2p 6 , only by the presence of a third shell. The portion of its configuration that is equivalent to neon is abbreviated as [Ne], allowing the configuration of phosphorus to be written as [Ne] 3s 2 3p 3 rather than writing out the details of the configuration of neon explicitly. This convention is useful as it is the electrons in the outermost shell that most determine the chemistry of the element. For a given configuration, the order of writing the orbitals is not completely fixed since only the orbital occupancies have physical significance. For example, the electron configuration of the titanium ground state can be written as either [Ar] 4s 2 3d 2 or [Ar] 3d 2 4s 2 . The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The second notation groups all orbitals with the same value of n together, corresponding to the "spectroscopic" order of orbital energies that is the reverse of the order in which electrons are removed from a given atom to form positive ions; 3d is filled before 4s in the sequence Ti 4+ , Ti 3+ , Ti 2+ , Ti + , Ti. The superscript 1 for a singly occupied subshell is not compulsory; for example aluminium may be written as either [Ne] 3s 2 3p 1 or [Ne] 3s 2 3p. 
In atoms where a subshell is unoccupied despite higher subshells being occupied (as is the case in some ions, as well as certain neutral atoms shown to deviate from the Madelung rule ), the empty subshell is either denoted with a superscript 0 or left out altogether. For example, neutral palladium may be written as either [Kr] 4d 10 5s 0 or simply [Kr] 4d 10 , and the lanthanum(III) ion may be written as either [Xe] 4f 0 or simply [Xe]. [ 4 ] It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental" (or "fine"), based on their observed fine structure : their modern usage indicates orbitals with an azimuthal quantum number , l , of 0, 1, 2 or 3 respectively. After f, the sequence continues alphabetically g, h, i... ( l = 4, 5, 6...), skipping j, although orbitals of these types are rarely required. [ 5 ] [ 6 ] The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below). The energy associated with an electron is that of its orbital. The energy of a configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground state . Any other configuration is an excited state . As an example, the ground state configuration of the sodium atom is 1s 2 2s 2 2p 6 3s 1 , as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p subshell, to obtain the 1s 2 2s 2 2p 6 3p 1 configuration, abbreviated as the 3p level.
Atoms can move from one configuration to another by absorbing or emitting energy. In a sodium-vapor lamp for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm. Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to X-ray photons. This would be the case, for example, to excite a 2p electron of sodium to the 3s level and form the excited 1s 2 2s 2 2p 5 3s 2 configuration. The remainder of this article deals only with the ground-state configuration, often referred to as "the" configuration of an atom or molecule. In his 1919 article "The Arrangement of Electrons in Atoms and Molecules", Irving Langmuir, building on Gilbert N. Lewis 's cubical atom theory and Walther Kossel 's chemical bonding theory, was the first to outline a "concentric theory of atomic structure". [ 7 ] Langmuir had developed his work on electron atomic structure from other chemists, as is shown in the development of the history of the periodic table and the octet rule . Niels Bohr (1923) incorporated Langmuir's model that the periodicity in the properties of the elements might be explained by the electronic structure of the atom. [ 8 ] His proposals were based on the then-current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s 2 2s 2 2p 6 3s 2 3p 4 (2.8.6). Bohr used 4 and 6 following Alfred Werner 's 1893 paper. In fact, the chemists accepted the concept of atoms long before the physicists.
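The quoted 589 nm sodium line sits squarely in the visible range, as a quick E = hc/λ check confirms (a photon energy of roughly 2.1 eV; the constants below are exact SI values).

```python
# Photon energy of the sodium D line (589 nm): E = h*c / lambda.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # J per eV
lam = 589e-9            # wavelength, m

E_joule = h * c / lam
E_ev = E_joule / e
print(round(E_ev, 2))   # about 2.1 eV, typical of valence-electron transitions
```

Core-electron excitations, by contrast, involve energies hundreds of times larger, which is why they fall in the X-ray range.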
Langmuir began his paper referenced above by saying, "…The problem of the structure of atoms has been attacked mainly by physicists who have given little consideration to the chemical properties which must ultimately be explained by a theory of atomic structure. The vast store of knowledge of chemical properties and relationships, such as is summarized by the Periodic Table, should serve as a better foundation for a theory of atomic structure than the relatively meager experimental data along purely physical lines... These electrons arrange themselves in a series of concentric shells, the first shell containing two electrons, while all other shells tend to hold eight.…" The valence electrons in the atom were described by Richard Abegg in 1904. [ 9 ] In 1924, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6. [ 10 ] However, neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect ). Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli in 1923 to ask for his help in saving quantum theory (the system now known as " old quantum theory "). Pauli hypothesized successfully that the Zeeman effect can be explained as depending only on the response of the outermost (i.e., valence) electrons of the atom. Pauli was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925): [ 11 ] It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [ l ], j [ m l ] and m [ m s ].
The Schrödinger equation , published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom: [ a ] this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936), [ 12 ] see below) for the order in which atomic orbitals are filled with electrons. The aufbau principle (from the German Aufbau , "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as: [ 13 ] a maximum of two electrons are put into orbitals in the order of increasing orbital energy, so that the lowest-energy subshells are occupied before electrons are placed in higher-energy orbitals. The principle works very well (for the ground states of the atoms) for the known 118 elements, although it is sometimes slightly wrong. The modern form of the aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule) . This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936, [ 12 ] and later given a theoretical justification by V. M. Klechkowski : [ 14 ] subshells are filled in the order of increasing n + l ; where two subshells have the same value of n + l , they are filled in order of increasing n . This gives the following order for filling the orbitals: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, ...). In this list the subshells in parentheses are not occupied in the ground state of the heaviest atom now known ( Og , Z = 118). The aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus , as in the shell model of nuclear physics and nuclear chemistry .
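The Madelung ordering and the ground-state configurations it predicts can be sketched as follows; this simplified builder fills subshells strictly by the (n + l, then n) rule and therefore ignores the known exceptions (Cr, Cu, Nb, and so on).

```python
# Build electron configurations by filling subshells in Madelung order:
# sort by n + l, breaking ties by smaller n. Exceptions (Cr, Cu, ...) ignored.
LABELS = "spdfg"

def madelung_order(max_n=8):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    config, remaining = [], z
    for n, l in madelung_order():
        if remaining == 0:
            break
        cap = 2 * (2 * l + 1)        # subshell capacity 2(2l+1)
        take = min(cap, remaining)
        config.append(f"{n}{LABELS[l]}{take}")
        remaining -= take
    return " ".join(config)

print(configuration(15))   # phosphorus: 1s2 2s2 2p6 3s2 3p3
print(configuration(22))   # titanium: ends ... 4s2 3d2, matching [Ar] 4s2 3d2
```

Note the sort key reproduces the stated rule: 4s (n + l = 4) precedes 3d (n + l = 5), and among the n + l = 5 subshells 3d precedes 4p precedes 5s.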
These blocks appear as the rectangular sections of the periodic table. The single exception is helium , which despite being an s-block atom is conventionally placed with the other noble gases in the p-block due to its chemical inertness, a consequence of its full outer shell (though there is discussion in the contemporary literature on whether this exception should be retained). The electrons in the valence (outermost) shell largely determine each element's chemical properties . The similarities in the chemical properties were remarked on more than a century before the idea of electron configuration. [ b ] The aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; in both cases this is only approximately true. It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions that cannot be calculated exactly [ c ] (although there are mathematical approximations available, such as the Hartree–Fock method ). That the aufbau principle is based on an approximation can be seen from the existence of an almost-fixed filling order at all: within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogen-like atom , which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by the quantum electrodynamic effects of the Lamb shift .)
The naïve application of the aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals . Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s 1 and [Ar] 4s 2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n + l = 4 ( n = 4, l = 0) while the 3d-orbital has n + l = 5 ( n = 3, l = 2). After calcium, most neutral atoms in the first series of transition metals ( scandium through zinc ) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d 5 4s 1 and [Ar] 3d 10 4s 1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons". However, this is not supported by the facts, as tungsten (W) has a Madelung-following d 4 s 2 configuration and not d 5 s 1 , and niobium (Nb) has an anomalous d 4 s 1 configuration that does not give it a half-filled or completely filled subshell. [ 15 ] The apparent paradox arises when electrons are removed from the transition metal atoms to form ions . The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. This interchange of electrons between 4s and 3d is found for all atoms of the first series of transition metals. [ d ] The configurations of the neutral atoms (K, Ca, Sc, Ti, V, Cr, ...) usually follow the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...; however the successive stages of ionization of a given atom (such as Fe 4+ , Fe 3+ , Fe 2+ , Fe + , Fe) usually follow the order 1s, 2s, 2p, 3s, 3p, 3d, 4s, ... 
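A naive aufbau filling can also be sketched in code. The snippet below (illustrative only; names are ours) fills subshells in Madelung order with capacity 2(2l + 1); it reproduces [Ar] 4s1 for potassium but, as discussed above, mispredicts exceptions such as chromium and copper:

```python
# Naive aufbau sketch: fill subshells in Madelung order with capacity
# 2(2l + 1). Illustrative only: it gives K = ...4s1 correctly but
# predicts ...4s2 3d4 for Cr, whose real configuration is 3d5 4s1.

FILLING = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d", "5p", "6s"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def naive_config(electrons):
    """Electron configuration of a neutral atom by strict Madelung filling."""
    parts = []
    for subshell in FILLING:
        if electrons <= 0:
            break
        take = min(CAPACITY[subshell[-1]], electrons)
        parts.append(f"{subshell}{take}")
        electrons -= take
    return " ".join(parts)

print(naive_config(19))  # K:  1s2 2s2 2p6 3s2 3p6 4s1
print(naive_config(24))  # Cr: predicted 4s2 3d4; the real atom is 3d5 4s1
```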
This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly does not. There is no special reason why the Fe 2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree–Fock method of atomic structure calculation. [ 16 ] More recently Scerri has argued that contrary to what is stated in the vast majority of sources including the title of his previous article on the subject, 3d orbitals rather than 4s are in fact preferentially occupied. [ 17 ] In chemical environments, configurations can change even more: Th 3+ as a bare ion has a configuration of [Rn] 5f 1 , yet in most Th III compounds the thorium atom has a 6d 1 configuration instead. [ 18 ] [ 19 ] Mostly, what is present is rather a superposition of various configurations. [ 15 ] For instance, copper metal is poorly described by either an [Ar] 3d 10 4s 1 or an [Ar] 3d 9 4s 2 configuration, but is rather well described as a 90% contribution of the first and a 10% contribution of the second. Indeed, visible light is already enough to excite electrons in most transition metals, and they often continuously "flow" through different configurations when that happens (copper and its group are an exception). [ 20 ] Similar ion-like 3d x 4s 0 configurations occur in transition metal complexes as described by the simple crystal field theory , even if the metal has oxidation state 0. 
For example, chromium hexacarbonyl can be described as a chromium atom (not ion) surrounded by six carbon monoxide ligands . The electron configuration of the central chromium atom is described as 3d 6 with the six electrons filling the three lower-energy d orbitals between the ligands. The other two d orbitals are at higher energy due to the crystal field of the ligands. This picture is consistent with the experimental fact that the complex is diamagnetic , meaning that it has no unpaired electrons. However, in a more accurate description using molecular orbital theory , the d-like orbitals occupied by the six electrons are no longer identical with the d orbitals of the free atom. There are several more exceptions to Madelung's rule among the heavier elements, and as atomic number increases it becomes more and more difficult to find simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations, [ 21 ] which are an approximate method for taking account of the effect of the other electrons on orbital energies. Qualitatively, for example, the 4d elements have the greatest concentration of Madelung anomalies, because the 4d–5s gap is larger than the 3d–4s and 5d–6s gaps. [ 22 ] For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light . In general, these relativistic effects [ 23 ] tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals. [ 24 ] This is the reason why the 6d elements are predicted to have no Madelung anomalies apart from lawrencium (for which relativistic effects stabilise the p 1/2 orbital as well and cause its occupancy in the ground state), as relativity intervenes to make the 7s orbitals lower in energy than the 6d ones. 
The table below shows the configurations of the f-block (green) and d-block (blue) atoms. It shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page) . However this also depends on the charge: a calcium atom has 4s lower in energy than 3d, but a Ca 2+ cation has 3d lower in energy than 4s. In practice the configurations predicted by the Madelung rule are at least close to the ground state even in these anomalous cases. [ 25 ] The empty f orbitals in lanthanum, actinium, and thorium contribute to chemical bonding, [ 26 ] [ 27 ] as do the empty p orbitals in transition metals. [ 28 ] Vacant s, d, and f orbitals have been shown explicitly, as is occasionally done, [ 29 ] to emphasise the filling order and to clarify that even orbitals unoccupied in the ground state (e.g. lanthanum 4f or palladium 5s) may be occupied and bonding in chemical compounds. (The same is also true for the p-orbitals, which are not explicitly shown because they are only actually occupied for lawrencium in gas-phase ground states.) The various anomalies describe the free atoms and do not necessarily predict chemical behavior. Thus for example neodymium typically forms the +3 oxidation state, despite its configuration [Xe] 4f 4 5d 0 6s 2 that if interpreted naïvely would suggest a more stable +2 oxidation state corresponding to losing only the 6s electrons. Contrariwise, uranium as [Rn] 5f 3 6d 1 7s 2 is not very stable in the +3 oxidation state either, preferring +4 and +6. 
[ 33 ] The electron-shell configuration of elements beyond hassium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120 . Element 121 should have the anomalous configuration [ Og ] 8s 2 5g 0 6f 0 7d 0 8p 1 , having a p rather than a g electron. Electron configurations beyond this are tentative and predictions differ between models, [ 34 ] but Madelung's rule is expected to break down due to the closeness in energy of the 5g, 6f, 7d, and 8p 1/2 orbitals. [ 31 ] That said, the filling sequence 8s, 5g, 6f, 7d, 8p is predicted to hold approximately, with perturbations due to the huge spin-orbit splitting of the 8p and 9p shells, and the huge relativistic stabilisation of the 9s shell. [ 35 ] In the context of atomic orbitals , an open shell is a valence shell which is not completely filled with electrons or that has not given all of its valence electrons through chemical bonds with other atoms or molecules during a chemical reaction . Conversely a closed shell is obtained with a completely filled valence shell. This configuration is very stable . [ 36 ] For molecules, "open shell" signifies that there are unpaired electrons . In molecular orbital theory, this leads to molecular orbitals that are singly occupied. In computational chemistry implementations of molecular orbital theory, open-shell molecules have to be handled by either the restricted open-shell Hartree–Fock method or the unrestricted Hartree–Fock method. Conversely a closed-shell configuration corresponds to a state where all molecular orbitals are either doubly occupied or empty (a singlet state ). [ 37 ] Open shell molecules are more difficult to study computationally. [ 38 ] Noble gas configuration is the electron configuration of noble gases . The basis of all chemical reactions is the tendency of chemical elements to acquire stability . 
Main-group atoms generally obey the octet rule , while transition metals generally obey the 18-electron rule . The noble gases ( He , Ne , Ar , Kr , Xe , Rn ) are less reactive than other elements because they already have a noble gas configuration. Oganesson is predicted to be more reactive due to relativistic effects for heavy atoms. Every system has the tendency to acquire the state of stability or a state of minimum energy, and so chemical elements take part in chemical reactions to acquire a stable electronic configuration similar to that of its nearest noble gas . An example of this tendency is two hydrogen (H) atoms reacting with one oxygen (O) atom to form water (H 2 O). Neutral atomic hydrogen has one electron in its valence shell , and on formation of water it acquires a share of a second electron coming from oxygen, so that its configuration is similar to that of its nearest noble gas helium (He) with two electrons in its valence shell. Similarly, neutral atomic oxygen has six electrons in its valence shell, and acquires a share of two electrons from the two hydrogen atoms, so that its configuration is similar to that of its nearest noble gas neon with eight electrons in its valence shell. Electron configuration in molecules is more complex than the electron configuration of atoms, as each molecule has a different orbital structure . The molecular orbitals are labelled according to their symmetry , [ e ] rather than the atomic orbital labels used for atoms and monatomic ions ; hence, the electron configuration of the dioxygen molecule, O 2 , is written 1σ g 2 1σ u 2 2σ g 2 2σ u 2 3σ g 2 1π u 4 1π g 2 , [ 39 ] [ 40 ] or equivalently 1σ g 2 1σ u 2 2σ g 2 2σ u 2 1π u 4 3σ g 2 1π g 2 . [ 1 ] The term 1π g 2 represents the two electrons in the two degenerate π*-orbitals ( antibonding ). From Hund's rules , these electrons have parallel spins in the ground state , and so dioxygen has a net magnetic moment (it is paramagnetic ). 
The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory . The electronic configuration of polyatomic molecules can change without absorption or emission of a photon through vibronic couplings . In a solid , the electron states become very numerous. They cease to be discrete, and effectively blend into continuous ranges of possible states (an electron band ). The notion of electron configuration ceases to be relevant, and yields to band theory . The most widespread application of electron configurations is in the rationalization of chemical properties , in both inorganic and organic chemistry . In effect, electron configurations, along with some simplified forms of molecular orbital theory , have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form. This approach is taken further in computational chemistry , which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the " linear combination of atomic orbitals " (LCAO) approximation, using an ever-larger and more complex basis set of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the aufbau principle. Not all methods in computational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method that discards the model. For atoms or molecules with more than one electron , the motions of the electrons are correlated and such a picture is no longer exact. A very large number of electronic configurations are needed to exactly describe any multi-electron system, and no energy can be associated with one single configuration.
However, the electronic wave function is usually dominated by a very small number of configurations and therefore the notion of electronic configuration remains essential for multi-electron systems. A fundamental application of electron configurations is in the interpretation of atomic spectra . In this case, it is necessary to supplement the electron configuration with one or more term symbols , which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.
https://en.wikipedia.org/wiki/Open_shell
Open synthetic biology is the idea that scientific knowledge and data should be openly accessible through common rights licensing to enable the rapid development of safe, effective and commercially viable synthetic biology applications. Its foundational concepts are open science and the Bermuda Principles . [ 1 ] Open science is the idea that scientific research should be openly shared to enable massive collaboration (e.g., the Polymath Project ). The Bermuda Principles is a private accord declaring that all DNA sequence data should be released in publicly accessible databases within 24 hours after generation. Open synthetic biology is a theoretical framework supporting a global ecosystem of responsible and capable research scientists working collaboratively on synthetic biology application development projects to reduce cost, [ 2 ] time, and risks of developing new synthetic biology applications (including open synthetic biology therapeutics) from the inception of primary science to applications reaching market readiness and commercial viability. Its general principle is that participating research scientists agree to share research, data, findings and results with the open synthetic biology community and the public generally. The Open SynBio community will set standards and expectations of the participants and their "science to market" process and the community will work collaboratively with downstream stakeholders (e.g., investors and business advisors) to ensure public safety and general availability of new synthetic biology applications. One example of open synthetic biology is when DNA2.0 donated several artificial gene sequences into an open-access repository run by the BioBricks Foundation . [ 3 ]
https://en.wikipedia.org/wiki/Open_synthetic_biology
Open systems are computer systems that provide some combination of interoperability , portability , and open software standards . (It can also refer to specific installations that are configured to allow unrestricted access by people and/or other computers; this article does not discuss that meaning). The term was popularized in the early 1980s, mainly to describe systems based on Unix , especially in contrast to the more entrenched mainframes and minicomputers in use at that time. Unlike older legacy systems , the newer generation of Unix systems featured standardized programming interfaces and peripheral interconnects; third party development of hardware and software was encouraged, a significant departure from the norm of the time, which saw companies such as Amdahl and Hitachi going to court for the right to sell systems and peripherals that were compatible with IBM's mainframes. The definition of "open system" can be said to have become more formalized in the 1990s with the emergence of independently administered software standards such as The Open Group 's Single UNIX Specification . Although computer users today are used to a high degree of both hardware and software interoperability, in the 20th century the open systems concept could be promoted by Unix vendors as a significant differentiator. IBM and other companies resisted the trend for decades, exemplified by a now-famous warning in 1991 by an IBM account executive that one should be "careful about getting locked into open systems". [ 1 ] However, in the first part of the 21st century many of these same legacy system vendors, particularly IBM and Hewlett-Packard , began to adopt Linux as part of their overall sales strategy, with " open source " marketed as trumping "open system". Consequently, an IBM mainframe with Linux on IBM Z is marketed as being more of an open system than commodity computers using closed-source Microsoft Windows —or even those using Unix, despite its open systems heritage. 
In response, more companies are opening the source code to their products, with a notable example being Sun Microsystems and their creation of the OpenOffice.org and OpenSolaris projects, based on their formerly closed-source StarOffice and Solaris software products.
https://en.wikipedia.org/wiki/Open_system_(computing)
Open System Tribology is a field of tribology that studies tribological systems that are exposed to and affected by the natural environment . [ 1 ] Factors influencing the tribological process will vary with the operating environment. This environment may be closed or open. Closed systems (e.g., gears in a gearbox ) are theoretically not affected by weather conditions. On the other hand, open systems are affected by weather conditions (i.e., precipitation, temperature, and humidity). For example, weather conditions will strongly influence the tribosystem formed in a ski -trail contact, and ski preparation specialists need to do thorough work before a ski race. Another example is that of tire –road and wheel- rail contacts that are exposed to the external environment. Here, artificial and natural contaminants will exert an influence on friction and wear. Sound and airborne particles from the contacting surfaces are not contained and are emitted into the surrounding air. Tribology at the wheel-rail contact plays a key role in railway performance. [ 1 ] Friction controls the tracking and braking , while wear affects reliability and endurance. Temperature influences the tribological process by affecting the properties of the contacting surfaces. Polymers, for example, are harder at low temperatures than at room temperature. [ 2 ]
https://en.wikipedia.org/wiki/Open_system_tribology
In structural engineering , the open web steel joist (OWSJ) is a lightweight steel truss consisting, in the standard form, of parallel chords and a triangulated web system, proportioned to span between bearing points. The main function of an OWSJ is to provide direct support for roof or floor deck and to transfer the load imposed on the deck to the structural frame i.e. beam and column . In order to accurately design an OWSJ, engineers consider the joist span between bearing points, joist spacing, slope, live loads , dead loads , collateral loads, seismic loads, wind uplift, deflection criteria and maximum joist depth allowed. Many steel joist manufacturers supply economical load tables in order to allow designers to select the most efficient joist sizes for their projects. While OWSJs can be adapted to suit a wide variety of architectural applications, the greatest economy will be realized when utilizing standard details, which may vary from one joist manufacturer to another. Some other shapes, in addition to the parallel top and bottom chord, are single slope, double slope, arch, gable and scissor configurations. These shapes may not be available from all joist manufacturers, and are usually supplied at a premium cost that reflects the complexity required. The manufacture of OWSJ in North America is overseen by the Steel Joist Institute (SJI). The SJI has worked since 1928 to maintain sound engineering practice throughout the industry. As a non-profit organization of active manufacturers, the Institute cooperates with governmental and business agencies to establish steel joist standards. Continuing research and updating are included in this work. [ 1 ] Load tables and specifications are published by the SJI in five categories: K-Series, LH-Series, DLH-Series, CJ-Series, and Joist Girders. Load tables are available in both Allowable Stress Design (ASD) and Load and Resistance Factor Design (LRFD). 
The first joist in 1923 was a Warren truss type, with top and bottom chords of round bars and a web formed from a single continuous bent bar. Various other types were developed, but problems also followed because each manufacturer had their own design and fabrication standards. Architects, engineers and builders found it difficult to compare rated capacities and to use fully the economies of steel joist construction. Members of the industry began to organize the institute, and in 1928 the first standard specifications were adopted, followed in 1929 by the first load table. The joists covered by these early standards were later identified as open web steel joists, SJ-Series. [ 1 ] Open Web Steel Joists, K-Series, were primarily developed to provide structural support for floors and roofs of buildings. They possess multiple advantages and features which have resulted in their wide use and acceptance throughout the United States and other countries. K-Series Joists are standardized regarding depths, spans, and load-carrying capacities. There are 63 separate designations in the Load Tables, representing joist depths from 10 inches (250 mm) through 30 inches (760 mm) in 2 inches (51 mm) increments and spans through 60 feet (18,000 mm). Standard K-Series Joists have a 2 + 1 ⁄ 2 inches (64 mm) end bearing depth so that, regardless of the overall joist depths, the tops of the joists lie in the same plane. Seat depths deeper than 2 + 1 ⁄ 2 inches (64 mm) can also be specified. Standard K-Series Joists are designed for simple span uniform loading which results in a parabolic moment diagram for chord forces and a linearly sloped shear diagram for web forces. When non-uniform and/or concentrated loads are encountered the shear and moment diagrams required may be shaped quite differently and may not be covered by the shear and moment design envelopes of a standard K-Series Joist. When conditions such as this arise, a KCS joist may be a good option. 
[ 1 ] KCS (K-Series Constant Shear) joists are designed in accordance with the Standard Specification for K-Series Joists. KCS joist chords are designed for a flat positive moment envelope. The moment capacity is constant at all interior panels. All webs are designed for a vertical shear equal to the specified shear capacity and interior webs will be designed for 100% stress reversal. [ 1 ] Longspan (LH) and Deep Longspan (DLH) Steel Joists are relatively light weight shop-manufactured steel trusses used in the direct support of floor or roof slabs or decks between walls, beams, and main structural members. The LH- and DLH-Series have been designed for the purpose of extending the use of joists to spans and loads in excess of those covered by Open Web Steel Joists, K-Series. LH-Series Joists have been standardized in depths from 18 inches (460 mm) through 48 inches (1,200 mm), for spans through 96 feet (29,000 mm). DLH-Series Joists have been standardized in depths from 52 inches (1,300 mm) through 120 inches (3,000 mm), for spans up through 240 feet (73,000 mm). Longspan and Deep Longspan Steel Joists can be furnished with either underslung or square ends, with parallel chords or with single or double pitched top chords to provide sufficient slope for roof drainage. Square end joists are primarily intended for bottom chord bearing. The depth of the bearing seat at the ends of underslung LH- and DLH-Series Joists have been established at 5 inches (130 mm) for chord section number 2 through 17. A bearing seat depth of 7 + 1 ⁄ 2 inches (190 mm) has been established for the DLH Series chord section number 18 through 25. [ 1 ] Open Web Composite Steel Joists, CJ-Series, were developed to provide structural support for floors and roofs which incorporate an overlying concrete slab while also allowing the steel joist and slab to act together as an integral unit after the concrete has adequately cured. 
The CJ-Series Joists are capable of supporting larger floor or roof loadings due to the attachment of the concrete slab to the top chord of the composite joist. Shear connection between the concrete slab and steel joist is typically made by the welding of shear studs through the steel deck to the underlying CJ-Series Composite Steel Joist. [ 2 ] Joist Girders are open web steel trusses used as primary framing members. They are designed as simple spans supporting equally spaced concentrated loads for a floor or roof system. These concentrated loads are considered to act at the panel points of the Joist Girders. These members have been standardized for depths from 20 to 120 inches (510 to 3,050 mm), and spans to 120 feet (37,000 mm). The standard depth at the bearing ends has been established at 7 + 1 ⁄ 2 inches (190 mm) for all Joist Girders. Joist Girders are usually attached to the columns by bolting with two 3 ⁄ 4 inch (19 mm) diameter A325 bolts. [ 1 ]
https://en.wikipedia.org/wiki/Open_web_steel_joist
Operability is the ability to keep a piece of equipment, a system or a whole industrial installation in a safe and reliable functioning condition, according to pre-defined operational requirements. In a computing environment with multiple systems, this includes the ability of products, systems and business processes to work together to accomplish a common task, such as finding and returning the availability of inventory for a flight. For a gas turbine engine , operability addresses the installed aerodynamic operation of the engine [ 1 ] to ensure that it operates with care-free throttle handling without compressor stall or surge or combustor flame-out. There must be no unacceptable loss of power or handling deterioration after ingesting birds, rain and hail or ingesting or accumulating ice. Design and development responsibilities include the components through which the thrust/power-producing flow passes, i.e. the intake, compressor, combustor, fuel system, turbine and exhaust. They also include the software in the computers which control the way the engine changes its speed in response to the actions of the pilot in selecting a start, selecting different idle settings and higher power ratings such as take-off, climb and cruise. The engine has to start to idle and accelerate and decelerate within agreed, or mandated, times while remaining within operating limits (shaft speeds, turbine temperature, combustor casing pressure) over the required aircraft operating envelope. Operability is considered one of the ilities and is closely related to reliability , supportability and maintainability . Operability also refers to whether or not a surgical operation can be performed to treat a patient with a reasonable degree of safety and chance of success. This computer science article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Operability
In mathematics , an operad is a structure that consists of abstract operations , each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad O, one defines an algebra over O to be a set together with concrete operations on this set which behave just like the abstract operations of O. For instance, there is a Lie operad L such that the algebras over L are precisely the Lie algebras ; in a sense L abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations . Operads originate in algebraic topology ; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 [ 1 ] [ 2 ] and by J. Peter May in 1972. [ 3 ] Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads: [ 4 ] The word "operad" was created by May as a portmanteau of "operations" and " monad " (and also because his mother was an opera singer). [ 5 ] Interest in operads was considerably renewed in the early 1990s when, based on early insights of Maxim Kontsevich , Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. [ 6 ] [ 7 ] Operads have since found many applications, such as in deformation quantization of Poisson manifolds , the Deligne conjecture , [ 8 ] or graph homology in the work of Maxim Kontsevich and Thomas Willwacher . Suppose X is a set and for n ∈ ℕ we define P(n) to be the set of all functions from the cartesian product of n copies of X to X.
We can compose these functions: given f ∈ P ( n ) {\displaystyle f\in P(n)} , f 1 ∈ P ( k 1 ) , … , f n ∈ P ( k n ) {\displaystyle f_{1}\in P(k_{1}),\ldots ,f_{n}\in P(k_{n})} , the composite f ∘ ( f 1 , … , f n ) ∈ P ( k 1 + ⋯ + k n ) {\displaystyle f\circ (f_{1},\ldots ,f_{n})\in P(k_{1}+\cdots +k_{n})} is defined as follows: given k 1 + ⋯ + k n {\displaystyle k_{1}+\cdots +k_{n}} arguments from X {\displaystyle X} , we divide them into n {\displaystyle n} blocks, the first one having k 1 {\displaystyle k_{1}} arguments, the second one k 2 {\displaystyle k_{2}} arguments, etc., and then apply f 1 {\displaystyle f_{1}} to the first block, f 2 {\displaystyle f_{2}} to the second block, etc. We then apply f {\displaystyle f} to the list of n {\displaystyle n} values obtained in this way. We can also permute arguments, i.e. we have a right action ∗ {\displaystyle *} of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} , defined by ( f ∗ s ) ( x 1 , … , x n ) = f ( x s ( 1 ) , … , x s ( n ) ) {\displaystyle (f*s)(x_{1},\ldots ,x_{n})=f(x_{s(1)},\ldots ,x_{s(n)})} for f ∈ P ( n ) {\displaystyle f\in P(n)} , s ∈ S n {\displaystyle s\in S_{n}} and x 1 , … , x n ∈ X {\displaystyle x_{1},\ldots ,x_{n}\in X} . The definition of a symmetric operad given below captures the essential properties of these two operations ∘ {\displaystyle \circ } and ∗ {\displaystyle *} . A non-symmetric operad (sometimes called an operad without permutations , or a non- Σ {\displaystyle \Sigma } or plain operad) consists of the following: satisfying the following coherence axioms: A symmetric operad (often just called operad ) is a non-symmetric operad P {\displaystyle P} as above, together with a right action of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} for n ∈ N {\displaystyle n\in \mathbb {N} } , denoted by ∗ {\displaystyle *} and satisfying The permutation actions in this definition are vital to most applications, including the original application to loop spaces. A morphism of operads f : P → Q {\displaystyle f:P\to Q} consists of a sequence of maps f n : P ( n ) → Q ( n ) {\displaystyle f_{n}:P(n)\to Q(n)} that preserve the identity, the composition and the symmetric group actions. Operads therefore form a category denoted by O p e r {\displaystyle {\mathsf {Oper}}} .
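The block-wise composition ∘ and the right S_n-action ∗ described above, for the operad of functions on a set X, can be sketched directly in Python. This is only an illustrative model; the helper names `compose` and `act` are made up here, and permutations are encoded as 0-indexed tuples.

```python
# P(n) is modelled as the set of Python functions of n arguments.

def compose(f, fs, arities):
    """Operadic composition f o (f_1, ..., f_n): split the incoming
    k_1 + ... + k_n arguments into n consecutive blocks of sizes
    arities = (k_1, ..., k_n), apply f_i to the i-th block, then
    apply f to the n resulting values."""
    def composed(*args):
        values, i = [], 0
        for g, k in zip(fs, arities):
            values.append(g(*args[i:i + k]))
            i += k
        return f(*values)
    return composed

def act(f, s):
    """Right action of a permutation s on f in P(n):
    (f * s)(x_1, ..., x_n) = f(x_{s(1)}, ..., x_{s(n)})."""
    return lambda *args: f(*(args[s[i]] for i in range(len(s))))
```

For example, composing binary addition with the pair (addition, multiplication), of arities (2, 2), yields the 4-ary operation sending (a, b, c, d) to (a + b) + c·d.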
So far operads have only been considered in the category of sets. More generally, it is possible to define operads in any symmetric monoidal category C . In that case, each P ( n ) {\displaystyle P(n)} is an object of C , the composition ∘ {\displaystyle \circ } is a morphism P ( n ) ⊗ P ( k 1 ) ⊗ ⋯ ⊗ P ( k n ) → P ( k 1 + ⋯ + k n ) {\displaystyle P(n)\otimes P(k_{1})\otimes \cdots \otimes P(k_{n})\to P(k_{1}+\cdots +k_{n})} in C (where ⊗ {\displaystyle \otimes } denotes the tensor product of the monoidal category), and the actions of the symmetric group elements are given by isomorphisms in C . A common example is the category of topological spaces and continuous maps, with the monoidal product given by the cartesian product . In this case, an operad is given by a sequence of spaces (instead of sets) { P ( n ) } n ≥ 0 {\displaystyle \{P(n)\}_{n\geq 0}} . The structure maps of the operad (the composition and the actions of the symmetric groups) are then assumed to be continuous. The result is called a topological operad . Similarly, in the definition of a morphism of operads, it would be necessary to assume that the maps involved are continuous. Other common settings to define operads include, for example, modules over a commutative ring , chain complexes , groupoids (or even the category of categories itself), coalgebras , etc. Given a commutative ring R we consider the category R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} of modules over R . An operad over R can be defined as a monoid object ( T , γ , η ) {\displaystyle (T,\gamma ,\eta )} in the monoidal category of endofunctors on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} (it is a monad ) satisfying some finiteness condition. [ note 1 ] For example, a monoid object in the category of "polynomial endofunctors" on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} is an operad. 
[ 8 ] Similarly, a symmetric operad can be defined as a monoid object in the category of S {\displaystyle \mathbb {S} } -objects , where S {\displaystyle \mathbb {S} } denotes the symmetric groups. [ 9 ] A monoid object in the category of combinatorial species is an operad in finite sets. An operad in the above sense is sometimes thought of as a generalized ring . For example, Nikolai Durov defines his generalized rings as monoid objects in the monoidal category of endofunctors on Set {\displaystyle {\textbf {Set}}} that commute with filtered colimits. [ 10 ] This is a generalization of a ring since each ordinary ring R defines a monad Σ R : Set → Set {\displaystyle \Sigma _{R}:{\textbf {Set}}\to {\textbf {Set}}} that sends a set X to the underlying set of the free R -module R ( X ) {\displaystyle R^{(X)}} generated by X . "Associativity" means that composition of operations is associative (the function ∘ {\displaystyle \circ } is associative), analogous to the axiom in category theory that f ∘ ( g ∘ h ) = ( f ∘ g ) ∘ h {\displaystyle f\circ (g\circ h)=(f\circ g)\circ h} ; it does not mean that the operations themselves are associative as operations. Compare with the associative operad , below. Associativity in operad theory means that expressions can be written involving operations without ambiguity from the omitted compositions, just as associativity for operations allows products to be written without ambiguity from the omitted parentheses. For instance, suppose θ {\displaystyle \theta } is a binary operation, written as θ ( a , b ) {\displaystyle \theta (a,b)} or ( a b ) {\displaystyle (ab)} ; θ {\displaystyle \theta } may or may not be associative. Then what is commonly written ( ( a b ) c ) {\displaystyle ((ab)c)} is unambiguously written operadically as θ ∘ ( θ , 1 ) {\displaystyle \theta \circ (\theta ,1)} .
This sends ( a , b , c ) {\displaystyle (a,b,c)} to ( a b , c ) {\displaystyle (ab,c)} (apply θ {\displaystyle \theta } on the first two, and the identity on the third), and then the θ {\displaystyle \theta } on the left "multiplies" a b {\displaystyle ab} by c {\displaystyle c} . This is clearer when depicted as a tree: which yields a 3-ary operation: However, the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is a priori ambiguous: it could mean θ ∘ ( ( θ , 1 ) ∘ ( ( θ , 1 ) , 1 ) ) {\displaystyle \theta \circ ((\theta ,1)\circ ((\theta ,1),1))} , if the inner compositions are performed first, or it could mean ( θ ∘ ( θ , 1 ) ) ∘ ( ( θ , 1 ) , 1 ) {\displaystyle (\theta \circ (\theta ,1))\circ ((\theta ,1),1)} , if the outer compositions are performed first (operations are read from right to left). Writing x = θ , y = ( θ , 1 ) , z = ( ( θ , 1 ) , 1 ) {\displaystyle x=\theta ,y=(\theta ,1),z=((\theta ,1),1)} , this is x ∘ ( y ∘ z ) {\displaystyle x\circ (y\circ z)} versus ( x ∘ y ) ∘ z {\displaystyle (x\circ y)\circ z} . That is, the tree is missing "vertical parentheses": If the top two rows of operations are composed first (puts an upward parenthesis at the ( a b ) c d {\displaystyle (ab)c\ \ d} line; does the inner composition first), the following results: which then evaluates unambiguously to yield a 4-ary operation. As an annotated expression: If the bottom two rows of operations are composed first (puts a downward parenthesis at the a b c d {\displaystyle ab\quad c\ \ d} line; does the outer composition first), the following results: which then evaluates unambiguously to yield a 4-ary operation: The operad axiom of associativity is that these yield the same result , and thus that the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is unambiguous. The identity axiom (for a binary operation) can be visualized in a tree as: meaning that the three operations obtained are equal: pre- or post- composing with the identity makes no difference.
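The two composition orders discussed above can be checked concretely in the endomorphism operad of Python functions. The snippet below is a self-contained sketch (the `compose` helper is an illustrative name, not standard notation); it represents operations symbolically as strings so the resulting 4-ary operation is visible.

```python
def compose(f, fs, arities):
    """f o (f_1, ..., f_n): split the arguments into blocks of the given
    sizes, apply each f_i to its block, then apply f to the results."""
    def composed(*args):
        values, i = [], 0
        for g, k in zip(fs, arities):
            values.append(g(*args[i:i + k]))
            i += k
        return f(*values)
    return composed

theta = lambda a, b: "(" + a + b + ")"   # a binary operation, written (ab)
one = lambda a: a                        # the identity operation

# Inner compositions first: theta o (theta o (theta, 1), 1).
inner_first = compose(theta, (compose(theta, (theta, one), (2, 1)), one), (3, 1))
# Outer composition first: (theta o (theta, 1)) o (theta, 1, 1).
outer_first = compose(compose(theta, (theta, one), (2, 1)), (theta, one, one), (2, 1, 1))
```

Both composites send (a, b, c, d) to "(((ab)c)d)", as the associativity axiom demands.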
As for categories, 1 ∘ 1 = 1 {\displaystyle 1\circ 1=1} is a corollary of the identity axiom. The most basic operads are the ones given in the section on "Intuition", above. For any set X {\displaystyle X} , we obtain the endomorphism operad E n d X {\displaystyle {\mathcal {End}}_{X}} consisting of all functions X n → X {\displaystyle X^{n}\to X} . These operads are important because they serve to define operad algebras . If O {\displaystyle {\mathcal {O}}} is an operad, an operad algebra over O {\displaystyle {\mathcal {O}}} is given by a set X {\displaystyle X} and an operad morphism O → E n d X {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{X}} . Intuitively, such a morphism turns each "abstract" operation of O ( n ) {\displaystyle {\mathcal {O}}(n)} into a "concrete" n {\displaystyle n} -ary operation on the set X {\displaystyle X} . An operad algebra over O {\displaystyle {\mathcal {O}}} thus consists of a set X {\displaystyle X} together with concrete operations on X {\displaystyle X} that follow the rules abstractly specified by the operad O {\displaystyle {\mathcal {O}}} . If k is a field , we can consider the category of finite-dimensional vector spaces over k ; this becomes a monoidal category using the ordinary tensor product over k. We can then define endomorphism operads in this category, as follows. Let V be a finite-dimensional vector space. The endomorphism operad E n d V = { E n d V ( n ) } {\displaystyle {\mathcal {End}}_{V}=\{{\mathcal {End}}_{V}(n)\}} of V consists of [ 11 ] If O {\displaystyle {\mathcal {O}}} is an operad, a k -linear operad algebra over O {\displaystyle {\mathcal {O}}} is given by a finite-dimensional vector space V over k and an operad morphism O → E n d V {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{V}} ; this amounts to specifying concrete multilinear operations on V that behave like the operations of O {\displaystyle {\mathcal {O}}} .
(Notice the analogy between operads and operad algebras on the one hand, and rings and modules on the other: a module over a ring R is given by an abelian group M together with a ring homomorphism R → End ⁡ ( M ) {\displaystyle R\to \operatorname {End} (M)} .) Depending on applications, variations of the above are possible: for example, in algebraic topology, instead of vector spaces and tensor products between them, one uses (reasonable) topological spaces and cartesian products between them. The little 2-disks operad is a topological operad where P ( n ) {\displaystyle P(n)} consists of ordered lists of n disjoint disks inside the unit disk of R 2 {\displaystyle \mathbb {R} ^{2}} centered at the origin. The symmetric group acts on such configurations by permuting the list of little disks. The operadic composition for little disks is illustrated in the accompanying figure to the right, where an element θ ∈ P ( 3 ) {\displaystyle \theta \in P(3)} is composed with an element ( θ 1 , θ 2 , θ 3 ) ∈ P ( 2 ) × P ( 3 ) × P ( 4 ) {\displaystyle (\theta _{1},\theta _{2},\theta _{3})\in P(2)\times P(3)\times P(4)} to yield the element θ ∘ ( θ 1 , θ 2 , θ 3 ) ∈ P ( 9 ) {\displaystyle \theta \circ (\theta _{1},\theta _{2},\theta _{3})\in P(9)} obtained by shrinking the configuration of θ i {\displaystyle \theta _{i}} and inserting it into the i- th disk of θ {\displaystyle \theta } , for i = 1 , 2 , 3 {\displaystyle i=1,2,3} . Analogously, one can define the little n-disks operad by considering configurations of disjoint n -balls inside the unit ball of R n {\displaystyle \mathbb {R} ^{n}} . [ 12 ] Originally the little n-cubes operad or the little intervals operad (initially called little n -cubes PROPs ) was defined by Michael Boardman and Rainer Vogt in a similar way, in terms of configurations of disjoint axis-aligned n -dimensional hypercubes (n-dimensional intervals ) inside the unit hypercube .
[ 13 ] Later it was generalized by May [ 14 ] to the little convex bodies operad , and "little disks" is a case of "folklore" derived from the "little convex bodies". [ 15 ] In graph theory, rooted trees form a natural operad. Here, P ( n ) {\displaystyle P(n)} is the set of all rooted trees with n leaves, where the leaves are numbered from 1 to n. The group S n {\displaystyle S_{n}} operates on this set by permuting the leaf labels. Operadic composition T ∘ ( S 1 , … , S n ) {\displaystyle T\circ (S_{1},\ldots ,S_{n})} is given by replacing the i -th leaf of T {\displaystyle T} by the root of the i -th tree S i {\displaystyle S_{i}} , for i = 1 , … , n {\displaystyle i=1,\ldots ,n} , thus attaching the n trees to T {\displaystyle T} and forming a larger tree, whose root is taken to be the same as the root of T {\displaystyle T} and whose leaves are numbered in order. The Swiss-cheese operad is a two-colored [ definition needed ] topological operad defined in terms of configurations of disjoint n -dimensional disks inside a unit n -semidisk and n -dimensional semidisks, centered at the base of the unit semidisk and sitting inside of it. The operadic composition comes from gluing configurations of "little" disks inside the unit disk into the "little" disks in another unit semidisk and configurations of "little" disks and semidisks inside the unit semidisk into the other unit semidisk. The Swiss-cheese operad was defined by Alexander A. Voronov . [ 16 ] It was used by Maxim Kontsevich to formulate a Swiss-cheese version of Deligne's conjecture on Hochschild cohomology. [ 17 ] Kontsevich's conjecture was proven partly by Po Hu , Igor Kriz , and Alexander A. Voronov [ 18 ] and then fully by Justin Thomas . [ 19 ] Another class of examples of operads are those capturing the structures of algebraic structures, such as associative algebras, commutative algebras and Lie algebras. 
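Returning to the rooted-tree operad described above: the grafting composition T ∘ (S_1, …, S_n) can be sketched in a few lines of Python. The nested-tuple encoding of trees and the helper names (`graft`, `num_leaves`) are illustrative choices, not standard notation.

```python
# A tree is either the placeholder string "leaf" or a tuple of subtrees;
# the leaves are implicitly numbered in left-to-right order.

def num_leaves(tree):
    if tree == "leaf":
        return 1
    return sum(num_leaves(child) for child in tree)

def graft(tree, subtrees):
    """Operadic composition in the rooted-tree operad: replace the i-th
    leaf of `tree` (in left-to-right order) by the root of the i-th tree
    in `subtrees`, producing one larger tree."""
    it = iter(subtrees)
    def go(t):
        if t == "leaf":
            return next(it)
        return tuple(go(child) for child in t)
    return go(tree)
```

For instance, grafting a 2-leaf corolla with a 2-leaf tree and a single leaf yields a 3-leaf tree, matching the arity bookkeeping k_1 + ⋯ + k_n of operadic composition.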
Each of these can be exhibited as a finitely presented operad, each of the three being generated by binary operations. For example, the associative operad is a symmetric operad generated by a binary operation ψ {\displaystyle \psi } , subject only to the condition that ψ ∘ ( ψ , 1 ) = ψ ∘ ( 1 , ψ ) {\displaystyle \psi \circ (\psi ,1)=\psi \circ (1,\psi )} . This condition corresponds to associativity of the binary operation ψ {\displaystyle \psi } ; writing ψ ( a , b ) {\displaystyle \psi (a,b)} multiplicatively, the above condition is ( a b ) c = a ( b c ) {\displaystyle (ab)c=a(bc)} . This associativity of the operation should not be confused with associativity of composition which holds in any operad; see the axiom of associativity , above. In the associative operad, each P ( n ) {\displaystyle P(n)} is given by the symmetric group S n {\displaystyle S_{n}} , on which S n {\displaystyle S_{n}} acts by right multiplication. The composite σ ∘ ( τ 1 , … , τ n ) {\displaystyle \sigma \circ (\tau _{1},\dots ,\tau _{n})} permutes its inputs in blocks according to σ {\displaystyle \sigma } , and within blocks according to the appropriate τ i {\displaystyle \tau _{i}} . The algebras over the associative operad are precisely the semigroups : sets together with a single binary associative operation. The k -linear algebras over the associative operad are precisely the associative k- algebras . The terminal symmetric operad is the operad which has a single n -ary operation for each n , with each S n {\displaystyle S_{n}} acting trivially. The algebras over this operad are the commutative semigroups; the k -linear algebras are the commutative associative k -algebras. Similarly, there is a non- Σ {\displaystyle \Sigma } operad for which each P ( n ) {\displaystyle P(n)} is given by the Artin braid group B n {\displaystyle B_{n}} . Moreover, this non- Σ {\displaystyle \Sigma } operad has the structure of a braided operad, which generalizes the notion of an operad from symmetric to braid groups.
In linear algebra , real vector spaces can be considered to be algebras over the operad R ∞ {\displaystyle \mathbb {R} ^{\infty }} of all linear combinations [ citation needed ] . This operad is defined by R ∞ ( n ) = R n {\displaystyle \mathbb {R} ^{\infty }(n)=\mathbb {R} ^{n}} for n ∈ N {\displaystyle n\in \mathbb {N} } , with the obvious action of S n {\displaystyle S_{n}} permuting components, and composition x → ∘ ( y 1 → , … , y n → ) {\displaystyle {\vec {x}}\circ ({\vec {y_{1}}},\ldots ,{\vec {y_{n}}})} given by the concatenation of the vectors x ( 1 ) y 1 → , … , x ( n ) y n → {\displaystyle x^{(1)}{\vec {y_{1}}},\ldots ,x^{(n)}{\vec {y_{n}}}} , where x → = ( x ( 1 ) , … , x ( n ) ) ∈ R n {\displaystyle {\vec {x}}=(x^{(1)},\ldots ,x^{(n)})\in \mathbb {R} ^{n}} . The vector x → = ( 2 , 3 , − 5 , 0 , … ) {\displaystyle {\vec {x}}=(2,3,-5,0,\dots )} for instance represents the operation of forming a linear combination with coefficients 2, 3, −5, 0, … This point of view formalizes the notion that linear combinations are the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of vector addition and scalar multiplication are a generating set for the operad of all linear combinations, while the linear combinations operad canonically encodes all possible operations on a vector space. Similarly, affine combinations , conical combinations , and convex combinations can be considered to correspond to the sub-operads where the terms of the vector x → {\displaystyle {\vec {x}}} sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex.
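The scale-and-concatenate composition of coefficient vectors described above can be sketched in Python; `apply_op` and `compose_ops` are illustrative names, and vectors are plain tuples of numbers.

```python
# An n-ary operation of the linear-combination operad is a coefficient
# vector x = (x_1, ..., x_n) in R^n.

def apply_op(x, vectors):
    """Interpret x as the operation sending n vectors v_1, ..., v_n
    (all of the same dimension) to x_1*v_1 + ... + x_n*v_n."""
    dim = len(vectors[0])
    return tuple(sum(c * v[j] for c, v in zip(x, vectors)) for j in range(dim))

def compose_ops(x, ys):
    """Operadic composition x o (y_1, ..., y_n): concatenate the scaled
    coefficient vectors x_1*y_1, ..., x_n*y_n."""
    return tuple(c * t for c, y in zip(x, ys) for t in y)
```

One can check that applying the composite to a list of vectors agrees with first forming the inner linear combinations block by block and then the outer one, which is exactly the operadic composition law.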
This formalizes what is meant by R n {\displaystyle \mathbb {R} ^{n}} or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories. The commutative-ring operad is an operad whose algebras are the commutative rings. It is defined by P ( n ) = Z [ x 1 , … , x n ] {\displaystyle P(n)=\mathbb {Z} [x_{1},\ldots ,x_{n}]} , with the obvious action of S n {\displaystyle S_{n}} and operadic composition given by substituting polynomials (with renumbered variables) for variables. A similar operad can be defined whose algebras are the associative, commutative algebras over some fixed base field. The Koszul-dual of this operad is the Lie operad (whose algebras are the Lie algebras), and vice versa. Typical algebraic constructions (e.g., free algebra construction) can be extended to operads. Let S e t S n {\displaystyle \mathbf {Set} ^{S_{n}}} denote the category whose objects are sets on which the group S n {\displaystyle S_{n}} acts. Then there is a forgetful functor O p e r → ∏ n ∈ N S e t S n {\displaystyle {\mathsf {Oper}}\to \prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}} , which simply forgets the operadic composition. It is possible to construct a left adjoint Γ : ∏ n ∈ N S e t S n → O p e r {\displaystyle \Gamma :\prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}\to {\mathsf {Oper}}} to this forgetful functor (this is the usual definition of free functor ). Given a collection of operations E , Γ ( E ) {\displaystyle \Gamma (E)} is the free operad on E. As for groups and rings, the free construction allows one to express an operad in terms of generators and relations.
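As an illustration of the substitution composition in the commutative-ring operad above, consider the following small worked example (the particular polynomials are chosen for illustration only):

```latex
% Composition in the commutative-ring operad P(n) = \mathbb{Z}[x_1, \ldots, x_n]:
% substitute g_i for the i-th variable of f, renumbering variables so that
% each g_i uses its own block of fresh variables.
f = x_1 x_2 \in P(2), \qquad g_1 = x_1 + x_2 \in P(2), \qquad g_2 = x_1^2 \in P(1),
\qquad
f \circ (g_1, g_2) = (x_1 + x_2)\,x_3^{2} \in P(2 + 1) = P(3).
```

Here g_1 keeps the variables x_1, x_2, while the single variable of g_2 is renumbered to x_3, so the composite is a polynomial in three variables, matching the arity count of operadic composition.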
By a free presentation of an operad O {\displaystyle {\mathcal {O}}} , we mean writing O {\displaystyle {\mathcal {O}}} as a quotient of a free operad F = Γ ( E ) {\displaystyle {\mathcal {F}}=\Gamma (E)} where E describes generators of O {\displaystyle {\mathcal {O}}} and the kernel of the epimorphism F → O {\displaystyle {\mathcal {F}}\to {\mathcal {O}}} describes the relations. A (symmetric) operad O = { O ( n ) } {\displaystyle {\mathcal {O}}=\{{\mathcal {O}}(n)\}} is called quadratic if it has a free presentation such that E = O ( 2 ) {\displaystyle E={\mathcal {O}}(2)} is the set of generators and the relations are contained in Γ ( E ) ( 3 ) {\displaystyle \Gamma (E)(3)} . [ 20 ] Clones are the special case of operads that are also closed under identifying arguments together ("reusing" some data). Clones can be equivalently defined as operads that are also a minion (or clonoid ). In Stasheff (2004) , Stasheff writes:
https://en.wikipedia.org/wiki/Operad
In algebra, an operad algebra is an "algebra" over an operad . It is a generalization of an associative algebra over a commutative ring R , with an operad replacing R . Given an operad O (say, a symmetric sequence in a symmetric monoidal ∞-category C ), an algebra over an operad , or O -algebra for short, is, roughly, a left module over O with multiplications parametrized by O . If O is a topological operad , then one can say an algebra over an operad is an O -monoid object in C . If C is symmetric monoidal, this recovers the usual definition. Let C be a symmetric monoidal ∞-category whose monoidal structure distributes over colimits. If f : O → O ′ {\displaystyle f:O\to O'} is a map of operads and, moreover, if f is a homotopy equivalence, then the ∞-category of algebras over O in C is equivalent to the ∞-category of algebras over O' in C . [ 1 ] This abstract algebra -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Operad_algebra
In mathematics , an operand is the object of a mathematical operation , i.e., it is the object or quantity that is operated on. [ 1 ] Unknown operands in equalities of expressions can be found by equation solving. The following arithmetic expression shows an example of operators and operands: In the above example, '+' is the symbol for the operation called addition . The operand '3' is one of the inputs (quantities) followed by the addition operator , and the operand '6' is the other input necessary for the operation. The result of the operation is 9. (The number '9' is also called the sum of the augend 3 and the addend 6.) An operand, then, is also referred to as "one of the inputs (quantities) for an operation". Operands may be nested, and may consist of expressions also made up of operators with operands. In the above expression '(3 + 5)' is the first operand for the multiplication operator and '2' the second. The operand '(3 + 5)' is an expression in itself, which contains an addition operator, with the operands '3' and '5'. Rules of precedence affect which values form operands for which operators: [ 2 ] In the above expression, the multiplication operator has higher precedence than the addition operator, so the multiplication operator has operands of '5' and '2'. The addition operator has operands of '3' and '5 × 2'. Depending on the mathematical notation being used, the position of an operator in relation to its operand(s) may vary. In everyday usage, infix notation is the most common; [ 3 ] however, other notations also exist, such as the prefix and postfix notations. These alternate notations are most common within computer science . Below is a comparison of three different notations — all represent an addition of the numbers '1' and '2'. In a mathematical expression, the order of operation is carried out from left to right.
Start with the leftmost value and seek the first operation to be carried out in accordance with the order specified above (i.e., start with parentheses and end with the addition/subtraction group). For example, in the expression 4 × 2² − (2 + 2²), the first operation to be acted upon is any and all expressions found inside a parenthesis. So beginning at the left and moving to the right, find the first (and in this case, the only) parenthesis, that is, (2 + 2²). Within the parenthesis itself is found the expression 2². The reader is required to find the value of 2² before going any further. The value of 2² is 4. Having found this value, the remaining expression looks like this: 4 × 2² − (2 + 4). The next step is to calculate the value of the expression inside the parenthesis itself, that is, (2 + 4) = 6. Our expression now looks like this: 4 × 2² − 6. Having calculated the parenthetical part of the expression, we start over again beginning with the leftmost value and move right. The next order of operation (according to the rules) is exponents. Start at the leftmost value, that is, 4, and scan your eyes to the right and search for the first exponent you come across. The first (and only) expression we come across that is expressed with an exponent is 2². We find the value of 2², which is 4. What we have left is the expression 4 × 4 − 6. The next order of operation is multiplication. 4 × 4 is 16. Now our expression looks like this: 16 − 6. The next order of operation according to the rules is division. However, there is no division operator sign (÷) in the expression, 16 − 6. So we move on to the next order of operation, i.e., addition and subtraction, which have the same precedence and are done left to right. So the correct value for our original expression, 4 × 2² − (2 + 2²), is 10. It is important to carry out the order of operation in accordance with rules set by convention. If the reader evaluates an expression but does not follow the correct order of operation, the reader will come forth with a different value.
The different value will be the incorrect value because the order of operation was not followed. The reader will arrive at the correct value for the expression if and only if each operation is carried out in the proper order. The number of operands of an operator is called its arity . [ 4 ] Based on arity, operators are chiefly classified as nullary (no operands), unary (1 operand), binary (2 operands), ternary (3 operands). Higher arities are less frequently denominated by specific terms, all the more so when function composition or currying can be used to avoid them. Other terms include: In computer programming languages , the definitions of operator and operand are almost the same as in mathematics. In computing, an operand is the part of a computer instruction which specifies what data is to be manipulated or operated on, while at the same time representing the data itself. [ 5 ] A computer instruction describes an operation such as add or multiply X, while the operand (or operands, as there can be more than one) specifies on which X to operate as well as the value of X. Additionally, in assembly language , an operand is a value (an argument) on which the instruction , named by mnemonic , operates. The operand may be a processor register , a memory address , a literal constant, or a label. A simple example (in the x86 architecture) is an instruction where the value in register operand AX is to be moved ( MOV ) into register BX . Depending on the instruction , there may be zero, one, two, or more operands.
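The precedence walkthrough and the notation comparison above can be checked mechanically in Python, whose evaluator applies the same conventional rules; `eval_postfix` is an illustrative helper written for this sketch, not a standard library function.

```python
# Checking the worked example: parentheses, then exponents, then
# multiplication/division, then addition/subtraction.
assert 2 + 2**2 == 6                   # the parenthesised part
assert 4 * 2**2 == 16                  # exponent, then multiplication
assert 4 * 2**2 - (2 + 2**2) == 10     # the full expression

# A tiny postfix (reverse Polish) evaluator, illustrating how an
# operator consumes its operands from a stack.
def eval_postfix(tokens):
    stack = []
    for tok in tokens:
        if tok in "+-*":
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b}[tok])
        else:
            stack.append(int(tok))
    return stack.pop()
```

For example, `eval_postfix("1 2 +".split())` evaluates the postfix form of 1 + 2 and returns 3.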
https://en.wikipedia.org/wiki/Operand
Operando spectroscopy is an analytical methodology wherein the spectroscopic characterization of materials undergoing reaction is coupled simultaneously with measurement of catalytic activity and selectivity. [ 1 ] The primary concern of this methodology is to establish structure-reactivity/selectivity relationships of catalysts and thereby yield information about mechanisms . Other uses include those in engineering improvements to existing catalytic materials and processes and in developing new ones. [ 2 ] In the context of organometallic catalysis, an in situ reaction involves the real-time measurement of a catalytic process using techniques such as mass spectrometry , NMR , infrared spectroscopy , and gas chromatography to help gain insight into functionality of the catalyst. Approximately 90% of industrial precursor chemicals are synthesized using catalysts. [ 3 ] Understanding the catalytic mechanism and active site is crucial to creating catalysts with optimal efficiency and maximal product yield. In situ reactor cell designs typically cannot maintain the pressure and temperature consistency required for true catalytic reaction studies, making these cells insufficient. Several spectroscopic techniques require liquid helium temperatures, making them inappropriate for real-world studies of catalytic processes. [ 1 ] Therefore, the operando reaction method must involve in situ spectroscopic measurement techniques, but under true catalytic kinetic conditions. [ 1 ] Operando (Latin for working ) [ 4 ] spectroscopy refers to continuous spectra collection of a working catalyst, allowing for simultaneous evaluation of both structure and activity/selectivity of the catalyst. The term operando first appeared in catalytic literature in 2002. [ 1 ] It was coined by Miguel A. Bañares, who sought to name the methodology in a way that captured the idea of observing a functional material — in this case a catalyst — under actual working , i.e. device operation, conditions.
The first international congress on operando spectroscopy took place in Lunteren, Netherlands, in March 2003, [ 3 ] followed by further conferences in 2006 (Toledo, Spain), [ 5 ] 2009 (Rostock, Germany), 2012 (Brookhaven, USA), and 2015 (Deauville, France). [ 6 ] The name change from in situ to operando for the research field of spectroscopy of catalysts under working conditions was proposed at the Lunteren congress. [ 3 ] The analytical principle of measuring the structure, properties and function of a material or component, disassembled or as part of a device, simultaneously under operating conditions is not restricted to catalysis and catalysts. Batteries and fuel cells have been subject to operando studies with respect to their electrochemical function. Operando spectroscopy is a class of methodology, rather than a specific spectroscopic technique such as FTIR or NMR. Operando spectroscopy is a logical technological progression of in situ studies. Catalyst scientists would ideally like to have a "motion picture" of each catalytic cycle, whereby the precise bond-making or bond-breaking events taking place at the active site are known; [ 7 ] this would allow a visual model of the mechanism to be constructed. The ultimate goal is to determine the structure-activity relationship of the substrate-catalyst species of the same reaction. Having two experiments—the performing of a reaction plus the real-time spectral acquisition of the reaction mixture—on a single reaction facilitates a direct link between the structures of the catalyst and intermediates, and of the catalytic activity/selectivity. Although monitoring a catalytic process in situ can provide information relevant to catalytic function, it is difficult to establish a perfect correlation because of the current physical limitations of in situ reactor cells. Complications arise, for example, for gas phase reactions which require large void volumes, which make it difficult to homogenize heat and mass within the cell.
[ 1 ] The crux of a successful operando methodology, therefore, is related to the disparity between laboratory setups and industrial setups, i.e., the limitations of properly simulating the catalytic system as it proceeds in industry. The purpose of operando spectroscopy is to measure the catalytic changes that occur within the reactor during operation using time-resolved (and sometimes spatially-resolved) spectroscopy. [ 7 ] Time-resolved spectroscopy theoretically monitors the formation and disappearance of intermediate species at the active site of the catalyst as bonds are made and broken in real time. However, current operando instrumentation often only works on the second or subsecond timescale; therefore, only relative concentrations of intermediates can be assessed. [ 7 ] Spatially resolved spectroscopy combines spectroscopy with microscopy to determine active sites of the catalyst studied and spectator species present in the reaction. [ 7 ] Operando spectroscopy requires measurement of the catalyst under (ideally) real working conditions , involving comparable temperature and pressure environments to those of industrially catalyzed reactions, but with a spectroscopic device inserted into the reaction vessel. The parameters of the reaction are then measured continuously during the reaction using the appropriate instrumentation, i.e., online mass spectrometry , gas chromatography or IR/NMR spectroscopy. [ 7 ] Operando instruments (in situ cells) must ideally allow for spectroscopic measurement under optimal reaction conditions. [ 8 ] Most industrial catalysis reactions require excessive pressure and temperature conditions, which subsequently degrade the quality of the spectra by lowering the resolution of signals. Currently many complications of this technique arise due to the reaction parameters and the cell design.
The catalyst may interact with the components of the operando apparatus; open space in the cell can have an effect on the absorption spectra, and the presence of spectator species in the reaction may complicate analysis of the spectra. Continuing development of operando reaction-cell design works towards minimizing the need for compromise between optimal catalysis conditions and spectroscopy. [ 9 ] [ 10 ] These reactors must handle specific temperature and pressure requirements while still providing access for spectrometry. Other requirements considered when designing operando experiments include reagent and product flow rates, catalyst position, beam paths, and window positions and sizes. All of these factors must be accounted for because the spectroscopic techniques used may themselves alter the reaction conditions. An example of this was reported by Tinnemans et al., who noted that local heating by a Raman laser can give spot temperatures exceeding 100 °C. [ 11 ] Also, Meunier reports that when using DRIFTS, there is a noticeable temperature difference (on the order of hundreds of degrees) between the crucible core and the exposed surface of the catalyst due to losses caused by the IR-transparent windows necessary for analysis. [ 10 ] Raman spectroscopy is one of the easiest methods to integrate into a heterogeneous operando experiment: as these reactions typically occur in the gas phase, there is very little interference, and good data can be obtained for the species on the catalytic surface. In order to use Raman, all that is required is to insert a small probe containing two optical fibers for excitation and detection. [ 7 ] Pressure and heat complications are essentially negligible, due to the nature of the probe. Operando confocal Raman micro-spectroscopy has been applied to the study of fuel cell catalytic layers with flowing reactant streams and controlled temperature. 
[ 12 ] Operando UV-vis spectroscopy is particularly useful for many homogeneous catalytic reactions because organometallic species are often colored. Fiber-optical sensors allow monitoring of the consumption of reactants and production of product within the solution through absorption spectra. Gas consumption as well as pH and electrical conductivity can also be measured using fiber-optic sensors within an operando apparatus. [ 13 ] One case study investigated the formation of gaseous intermediates in the decomposition of CCl 4 in the presence of steam over La 2 O 3 using Fourier-transform infrared spectroscopy . [ 14 ] This experiment produced useful information about the reaction mechanism, active site orientation, and about which species compete for the active site. A case study by Beale et al. involved preparation of iron phosphates and bismuth molybdate catalysts from an amorphous precursor gel. [ 15 ] The study found that there were no intermediate phases in the reaction, and helped to determine kinetic and structural information. The article uses the dated term in-situ, but the experiment uses, in essence, an operando method. Although x-ray diffraction does not count as a spectroscopy method, it is often used as an operando method in various fields, including catalysis. X-ray spectroscopy methods can be used for genuine operando analyses of catalysts and other functional materials. The redox dynamics of sulfur on a Ni/GDC (nickel/gadolinium-doped ceria) anode during solid oxide fuel cell (SOFC) operation at mid- and low-range temperatures have been studied with operando S K-edge XANES. Ni is a typical catalyst material for the anode in high-temperature SOFCs. [ 16 ] The operando spectro-electrochemical cell for this high-temperature gas-solid reaction study under electrochemical conditions was based on a typical high-temperature heterogeneous catalysis cell, which was further equipped with electric terminals. 
Very early method development for operando studies on PEM-FC fuel cells was done by Haubold et al. at Forschungszentrum Jülich and HASYLAB . Specifically, they developed plexiglas spectro-electrochemical cells for XANES, EXAFS, SAXS and ASAXS studies with control of the electrochemical potential of the fuel cell . Under operation of the fuel cell they determined the changes in particle size, oxidation state and shell formation of the platinum electrocatalyst . [ 17 ] In contrast to the SOFC operation conditions, this was a PEM-FC study in a liquid environment at ambient temperature. The same operando method is applied to battery research and yields information on the changes of the oxidation state of electrochemically active elements in a cathode such as Mn via XANES, information on the coordination shell and bond length via EXAFS, and information on microstructure changes during battery operation via ASAXS. [ 18 ] Since lithium-ion batteries are intercalation batteries, information on the chemistry and electronic structure in the bulk during operation is of interest. For this, soft x-ray information can be obtained using hard X-ray Raman scattering . [ 19 ] Fixed-energy X-ray absorption voltammetry (FEXRAV) has been developed and applied to the study of the catalytic cycle for the oxygen evolution reaction on iridium oxide. FEXRAV consists of recording the absorption coefficient at a fixed energy while varying the electrode potential at will in an electrochemical cell during the course of an electrochemical reaction. It allows rapid screening of several systems under different experimental conditions ( e.g. , nature of the electrolyte, potential window), preliminary to deeper XAS experiments. [ 20 ] The soft X-ray regime ( i.e. with photon energy < 1000 eV) can be profitably used for investigating heterogeneous solid-gas reactions. In this case, it has been shown that XAS can be sensitive both to the gas phase and to the solid surface states. 
[ 21 ] One case study monitored the dehydrogenation of propane to propene using micro-GC. [ 14 ] Reproducibility for the experiment was high. The study found that the catalyst (Cr/Al 2 O 3 ) activity increased to a sustained maximum of 10% after 28 minutes, an industrially useful insight into the working stability of a catalyst. Use of mass spectrometry as a second component of an operando experiment allows for optical spectra to be obtained before obtaining a mass spectrum of the analytes. [ 22 ] Electrospray ionization allows a wider range of substances to be analysed than other ionization methods, due to its ability to ionize samples without thermal degradation. In 2017, Frank Crespilho and coworkers introduced a new approach to operando differential electrochemical mass spectrometry (DEMS), aimed at evaluating enzyme activity. NAD-dependent alcohol dehydrogenase (ADH) enzymes for ethanol oxidation were investigated by DEMS. The broad mass spectra obtained under bioelectrochemical control and with unprecedented accuracy were used to provide new insight into the enzyme kinetics and mechanisms. [ 23 ] Operando spectroscopy has become a vital tool for surface chemistry. Nanotechnology , used in materials science , involves active catalytic sites on a reagent surface with at least one dimension on the nanoscale, approximately 1–100 nm. As particle size decreases, specific surface area increases. This results in a more reactive catalytic surface. [ 24 ] The reduced scale of these reactions affords several opportunities while presenting unique challenges; for example, due to the very small size of the crystals (sometimes <5 nm), any X-ray diffraction signal may be very weak. [ 25 ] As catalysis is a surface process, one particular challenge in catalytic studies is resolving the typically weak spectroscopic signal of the catalytically active surface against that of the inactive bulk structure. 
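The surface-versus-bulk argument can be made concrete with a back-of-the-envelope estimate. The sketch below approximates the fraction of atoms sitting at the surface of a spherical particle as a shell one atomic diameter thick; the atomic diameter and particle sizes are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope sketch: fraction of atoms at the surface of a
# spherical particle of diameter d_nm, approximated as a surface shell
# one atomic diameter (a_nm) thick. All numbers are illustrative.
def surface_fraction(d_nm, a_nm=0.25):
    """Approximate surface-atom fraction for a sphere of diameter d_nm."""
    if d_nm <= 2 * a_nm:
        return 1.0                     # particle is essentially all surface
    core = (d_nm - 2 * a_nm) ** 3      # core volume relative to d_nm**3
    return 1.0 - core / d_nm ** 3

for d in (100, 10, 2):                 # micro- to nano-scale diameters, nm
    print(f"d = {d:>3} nm -> ~{surface_fraction(d):.0%} of atoms at surface")
```

Under these assumptions only a percent or so of the atoms in a 100 nm particle lie at the surface, versus well over half in a 2 nm particle, which is why shrinking the particles boosts the surface signal relative to the bulk.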
Moving from the micro to the nano scale increases the surface-to-volume ratio of the particles, maximizing the signal of the surface relative to that of the bulk. [ 25 ] Furthermore, as the scale of the reaction decreases towards the nano scale, individual processes can be discerned that would otherwise be lost in the average signal of a bulk reaction [ 25 ] composed of multiple coincident steps and species such as spectators, intermediates, and reactive sites. [ 14 ] Operando spectroscopy is widely applicable to heterogeneous catalysis , which is largely used in industrial chemistry. An example of operando methodology to monitor heterogeneous catalysis is the dehydrogenation of propane with molybdenum catalysts commonly used in the petroleum industry. [ 26 ] Mo/SiO 2 and Mo/Al 2 O 3 were studied with an operando setup involving EPR / UV-Vis , NMR/UV-Vis, and Raman . The study examined the solid molybdenum catalyst in real time. It was determined that the molybdenum catalyst exhibited propane dehydrogenation activity, but deactivated over time. The spectroscopic data showed that the most likely catalytically active state was Mo 4+ in the production of propene. The deactivation of the catalyst was determined to be the result of coke formation and the irreversible formation of MoO 3 crystals, which were difficult to reduce back to Mo 4+ . [ 7 ] [ 26 ] The dehydrogenation of propane can also be achieved with chromium catalysts, through the reduction of Cr 6+ to Cr 3+ . [ 7 ] Propylene is one of the most important organic starting materials used globally, particularly in the synthesis of various plastics. Therefore, the development of effective catalysts to produce propylene is of great interest. [ 27 ] Operando spectroscopy is of great value to the further research and development of such catalysts. Combining operando Raman, UV–Vis and ATR-IR is particularly useful for studying homogeneous catalysis in solution. 
Transition-metal complexes can perform catalytic oxidation reactions on organic molecules; however, many of the corresponding reaction pathways are still unclear. For example, an operando study of the oxidation of veratryl alcohol by a salcomine catalyst at high pH [ 7 ] determined that the initial oxidation of the two substrate molecules to aldehydes is followed by the reduction of molecular oxygen to water, and that the rate-determining step is the detachment of the product. [ 28 ] Understanding organometallic catalytic activity on organic molecules is highly valuable for the further development of materials science and pharmaceuticals.
https://en.wikipedia.org/wiki/Operando_spectroscopy
Operating capacity , or rated operating capacity (ROC), refers to the calculated tipping load: the load that a machine can safely pick up and operate with, without tipping or nose-diving the equipment. It is not to be confused with Operating weight . [ 1 ] In a business context, the range of operating capacity is the level of output within which a company expects to operate, commonly during a short-term period. [ 2 ] This engineering-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Operating_capacity
An operating context ( OC ) for an application is the external environment that influences its operation. For a mobile application, the OC is defined by the hardware and software environment in the device, the target user, and other constraints imposed by various other stakeholders, such as a carrier. This concept differs from the operating system (OS) by the impact of these various other stakeholders. Here is an example of one device, with one operating system, changing its operating context without changing the OS. A user with a mobile phone changes SIM cards , removing card A, and inserting card B. The phone will now make any network calls over cell phone carrier B's network, rather than A's. Any applications running on the phone will run in a new operating context, and will often have to change functionality to adapt to the abilities, and business logic, of the new carrier. The network, spectrum, and wireless protocol all change in this example. These changes must be reflected back to the user, so the user knows what experience to expect; thus these changes also change the user interface (UI). Situations exist where one can program against a context, with less concern about what hardware the code will actually run on. Examples include Flash and Android . Unfortunately, it is also quite common that code written in a hardware-free context will encounter hardware-specific bugs. This is common with software that interacts more directly with personal computer (PC) hardware or mobile phones . This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Operating_context
Operating deflection shape ( ODS ) is a term often used in structural vibration analysis , known as ODS analysis. ODS analysis is a method used for visualisation of the vibration pattern of a machine or structure as influenced by its own operating forces. This is in contrast to the study of the vibration pattern of a machine under a known external force, which is called modal analysis . [ 1 ] This engineering-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Operating_deflection_shape
The operating point is a specific point within the operating characteristic of a technical device. This point is established by the properties of the system together with outside influences and parameters. In electronic engineering, establishing an operating point is called biasing . The operating point of a system is the intersection point of the torque-speed curves of the drive and the machine. Both devices are linked with a shaft, so the speed is always identical. The drive creates the torque which rotates both devices. The machine creates the counter-torque, e.g. by being a moved device which needs permanent energy, or a wheel turning against the static friction of the track. At the operating point, the driving torque and the counter-torque are balanced, so the speed no longer changes. A change in speed away from this stable operating point is only possible with a new control intervention. This can be a change in the load of the machine or the power of the drive, both of which change the torque because they change the characteristic curves. The drive-machine system then runs to a new operating point with a different speed and a different balance of torques. If the drive torque is higher than the counter-torque at every speed, then the system does not have an operating point; the result is that the speed increases up to the idle speed or even until destruction. If the counter-torque is higher at every speed, the speed will decrease until the system stops. Even at an unstable operating point, the law of the balance of torques remains valid. But when the operating point is unstable, the characteristic curves of drive and machine are nearly parallel there. In such a case a small change in torque results in a big change of speed. In practice, no device has a characteristic curve so thin that the intersection point can be clearly determined. 
Because of parallel characteristic curves, internal and external friction, and mechanical imperfections, the unstable operating point is really a band of possible operating states rather than a point. Running at an unstable operating point is therefore undesirable. The middle point on the curve in the third picture on the right is an unstable point, too. However, the above-mentioned assumptions are not valid here. Torque and speed are the same, but if the speed is increased only a little, the torque of the drive becomes much higher than the counter-torque of the machine. The same applies, vice versa, when reducing the speed. For this reason this operating point does not have a stabilizing effect on the speed: the speed will run away to the left or the right side of the point, and the drive will run stably there. In the lower right picture the electrical drive (AC motor) moves a conveyor belt. This type of machine has a nearly constant counter-torque over the whole range of speed. By choosing an incorrect drive (incorrect in size and type) there will be three possible operating points with the necessary working torque. Naturally, the operating point with the highest speed is wanted, because only there is the mechanical power (which is proportional to torque times speed) the highest. At the other operating points the majority of the electrical power (proportional only to the torque) is merely converted into heat inside the drive. Besides the bad power balance, the drive can also overheat this way. In the example shown in picture three, the desired right operating point with the same torque but higher speed (and therefore higher power) cannot be reached by simply starting the drive. The reason is the technically induced dip in the drive characteristic in the middle of the curve. The speed will reach this area but not increase further. 
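The torque-balance picture described above can be reproduced numerically. The sketch below intersects a simplified induction-motor torque-speed curve (the Kloss formula, which omits the pull-up dip, so against a constant load it yields one unstable and one stable balance point rather than three) with a constant counter-torque; all parameter values are invented for illustration. A balance point is classified as stable when the net torque falls as speed rises through the crossing.

```python
# Illustrative sketch of the conveyor-belt situation above: a simplified
# induction-motor torque-speed curve (Kloss formula) intersected with a
# constant load torque. All parameter values are invented for illustration.
N_SYNC = 1500.0      # synchronous speed, rpm
T_BREAKDOWN = 40.0   # breakdown torque, N*m
S_K = 0.2            # slip at breakdown torque
T_LOAD = 20.0        # constant counter-torque of the machine, N*m

def drive_torque(n):
    """Kloss formula: torque of the induction motor at speed n (rpm)."""
    s = max((N_SYNC - n) / N_SYNC, 1e-9)   # slip, clamped to avoid /0
    return 2.0 * T_BREAKDOWN / (s / S_K + S_K / s)

def operating_points(steps=30000):
    """Scan speeds for torque balance; a point is stable when the net
    torque (drive minus load) falls as speed rises through the crossing."""
    ns = [i * N_SYNC / steps for i in range(steps + 1)]
    fs = [drive_torque(n) - T_LOAD for n in ns]
    points = []
    for i in range(steps):
        if fs[i] * fs[i + 1] < 0:          # sign change: balance point inside
            n = 0.5 * (ns[i] + ns[i + 1])
            falling = fs[i + 1] < fs[i]
            points.append((n, "stable" if falling else "unstable"))
    return points

points = operating_points()
```

With these invented parameters the starting torque lies below the load torque, so the motor cannot start on its own: the low-speed crossing is unstable and only the high-speed crossing near synchronous speed is a stable operating point.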
For such machines with constant torques, a rotation-speed-dependent coupling can be used to prevent stalling during start-up. (Of course a larger motor would also do, but this is not as economical.) With the coupling, the counter-torque is only introduced once the load-less drive has reached a speed beyond the unstable operating point. Then the drive can safely speed up. Alternatively, a drive with an adequate characteristic curve can be chosen. In the past, shunt motors were used for this purpose; nowadays asynchronous AC motors are used, or AC motors in combination with a variable frequency drive . In an electronic amplifier , an operating point is a combination of current and voltage at "no signal" conditions; application of a signal to the stage changes the voltage and current in the stage. The operating point in an amplifier is set by the intersection of the load line with the non-linear characteristic of the device. By adjusting the bias on the stage, an operating point can be selected that maximizes the signal output of the stage and minimizes distortion.
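In the amplifier case, the operating point can likewise be found numerically as the intersection of the load line with the device characteristic. The sketch below uses a diode-like exponential (Shockley) characteristic and solves for the quiescent point by bisection; the supply voltage, resistor, and diode parameters are illustrative assumptions, not values from any particular circuit.

```python
import math

# Numerical sketch of setting an operating point: the load line of a supply
# (VCC) and resistor (R) intersected with an exponential diode-like device
# characteristic (Shockley equation). All component values are illustrative.
VCC, R = 5.0, 1000.0          # supply voltage (V) and load resistor (ohm)
I_S, V_T = 1e-12, 0.025       # saturation current (A), thermal voltage (V)

def device_current(v):
    """Shockley equation: current through the device at voltage v."""
    return I_S * (math.exp(v / V_T) - 1.0)

def load_line_current(v):
    """Current supplied through the resistor when the device drops v."""
    return (VCC - v) / R

def operating_point(lo=0.0, hi=VCC, tol=1e-9):
    """Bisection on the current mismatch; at the root both currents agree."""
    f = lambda v: load_line_current(v) - device_current(v)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    v = 0.5 * (lo + hi)
    return v, load_line_current(v)

v_q, i_q = operating_point()   # the quiescent ("Q") point of the stage
```

The same root-finding idea carries over to transistor stages, where the device curve is replaced by the transistor's output characteristic at the chosen bias.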
https://en.wikipedia.org/wiki/Operating_point
Operation: Bot Roast is an operation by the FBI to track down bot herders , crackers, and virus coders who install malicious software on computers through the Internet without the owners' knowledge, turning each compromised computer into a zombie computer that sends out spam to other computers, the infected machines together forming a botnet , a network of bot-infected computers. The operation was launched because the vast scale of botnet resources poses a threat to national security . [ 1 ] The operation was created to disrupt and dismantle bot herders' networks. In June 2007, the FBI had identified about 1 million computers that were compromised, leading to the arrest of the persons responsible for creating the malware . In the process, owners of infected computers were notified, many of whom were unaware of the exploitation. [ 1 ] [ 2 ] Some early results of the operation include charges against the following:
https://en.wikipedia.org/wiki/Operation:_Bot_Roast
Operation Dark Winter was the code name for a senior-level bio-terrorist attack simulation conducted on June 22–23, 2001. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It was designed to carry out a mock version of a covert and widespread smallpox attack on the United States. Tara O'Toole and Tom Inglesby of the Johns Hopkins Center for Civilian Biodefense Strategies (CCBS) / Center for Strategic and International Studies (CSIS), and Randy Larsen and Mark DeMier of Analytic Services were the principal designers, authors, and controllers of the Dark Winter project. Dark Winter was focused on evaluating the inadequacies of a national emergency response during the use of a biological weapon against the American populace. The exercise was intended to establish preventive measures and response strategies by increasing governmental and public awareness of the magnitude and potential of such a threat posed by biological weapons. Dark Winter's simulated scenario involved an initial localized smallpox attack on Oklahoma City, Oklahoma , with additional smallpox attack cases in Georgia and Pennsylvania . The simulation was then designed to spiral out of control, and to be an inherently unwinnable scenario. This would create a contingency in which the National Security Council struggles both to determine the origin of the attack and to contain the spreading virus. As responders prove unable to keep pace with the disease's rate of spread, a new catastrophic contingency emerges in which massive civilian casualties overwhelm America's emergency response capabilities. The disastrous contingencies that would result in the massive loss of civilian life were used to exploit the weaknesses of the U.S. health care infrastructure and its inability to handle such a threat. The contingencies were also meant to address the widespread panic that would emerge and result in mass social breakdown and mob violence. 
The scenario also explored the many difficulties that the media would face when providing American citizens with the necessary information regarding safety procedures. Discussing the outcome of Dark Winter, Bryan Walsh noted "The timing--just a few months before the 9/11 attack--was eerily prescient, as if the organizers had foreseen how the threat of terrorism, including bioterrorism, would come to consume the U.S. government and public in the years to come." [ 5 ] According to UPMC's Center for Health Security, Dark Winter outlined several key findings with respect to the United States healthcare system's ability to respond to a localized bioterrorism event: An attack on the United States with biological weapons could threaten vital national security interests. [ 6 ] In addition to the possibility of massive civilian casualties, Dark Winter outlined the possible breakdown in essential institutions, resulting in a loss of confidence in government, followed by civil disorder, and a violation of democratic processes by authorities attempting to restore order. Shortages of vaccines and other drugs affected the response available to contain the epidemic, as well as the ability of political leaders to offer reassurance to the American people. [ 7 ] This led to great public anxiety and flight by people desperate to get vaccinated, and it had a significant effect on the decisions taken by the political leadership. [ 7 ] In addition, Dark Winter revealed that a catastrophic biowarfare event in the United States would lead to considerably reduced U.S. strategic flexibility abroad. [ 6 ] Current organizational structures and capabilities are not well suited for the management of a biowarfare attack. [ 6 ] Dark Winter revealed that major "fault lines" exist between different levels of government (federal, state, and local), between government and the private sector, among different institutions and agencies, and within the public and private sector. 
Leaders are unfamiliar with the character of bioterrorist attacks, available policy options, and their consequences. Federal and state priorities may be unclear, differ, or conflict; authorities may be uncertain; and constitutional issues may arise. [ 7 ] For example, state leaders wanted control of decisions regarding the imposition of disease-containment measures (e.g., mandatory vs. voluntary isolation and vaccination), [ 7 ] the closure of state borders to all traffic and transportation, [ 7 ] and when or whether to close airports. [ 7 ] Federal officials, on the other hand, argued that such issues were best decided on a national basis to ensure consistency and to give the President maximum control of military and public-safety assets. [ 7 ] Leaders in states most affected by smallpox wanted immediate access to smallpox vaccine for all citizens of their states, [ 7 ] but the federal government had to balance these requests against military and other national priorities. [ 7 ] State leaders were opposed to federalizing the National Guard, which they were relying on to support logistical and public supply needs, [ 7 ] while a number of federal leaders argued that the National Guard should be federalized. [ 7 ] There is no surge capability in the U.S. healthcare and public health systems, [ 7 ] or in the pharmaceutical and vaccine industries. [ 6 ] The exercise was designed to simulate a sudden and unexpected biowarfare event for which the United States healthcare system was unprepared. In the absence of sufficient preparation, Dark Winter revealed that the lack of sufficient vaccine or drugs to prevent the spread of disease severely limited management options. 
[ 7 ] Due to the institutionally limited "surge capacity" of the American healthcare system, hospitals quickly became overwhelmed and rendered effectively inoperable by the sudden and continued influx of new cases, exacerbated by patients with common illnesses who feared they might have smallpox, [ 7 ] and people who were otherwise healthy but concerned about their possible exposure. [ 7 ] The challenges of making correct diagnoses and rationing scarce resources, combined with shortages of health care staff, who were themselves worried about becoming infected or bringing infection home to their families, imposed a huge burden on the health care system. [ 6 ] The simulation also noted that while demand was highest in cities and states that had been directly attacked, [ 7 ] by the time victims became symptomatic, they were geographically dispersed, with some having traveled far from the original attack site. [ 7 ] The simulation also found that without sufficient surge capability, public health agencies' analysis of the scope, source, and progress of the epidemic was greatly impeded, as was their ability to educate and reassure the public, and their capacity to limit casualties and the spread of disease. [ 6 ] For example, even after the smallpox attack was recognized, decisionmakers were confronted with many uncertainties and wanted information that was not immediately available. (In fact, they were given more information on locations and numbers of infected people than would likely be available in reality.) [ 7 ] Without accurate and timely information, participants found it difficult to quickly identify the locations of the original attacks; to immediately predict the likely size of the epidemic on the basis of initial cases; to know how many people were exposed; to find out how many were hospitalized and where; or to keep track of how many had been vaccinated. [ 7 ] Dealing with the media will be a major immediate challenge for all levels of government. 
[ 6 ] Dark Winter revealed that information management and communication (e.g., dealing with the press effectively, communication with citizens, maintaining the information flows necessary for command and control at all institutional levels) will be a critical element in crisis/consequence management. For example, participants worried that it would not be possible to forcibly impose vaccination or travel restrictions on large groups of the population without their general cooperation. [ 7 ] To gain that cooperation, the President and other leaders in Dark Winter recognized the importance of persuading their constituents that there was fairness in the distribution of vaccine and other scarce resources, [ 7 ] that the disease-containment measures were for the general good of society, [ 7 ] that all possible measures were being taken to prevent the further spread of the disease, [ 7 ] and that the government remained firmly in control despite the expanding epidemic. [ 7 ] Should a contagious bioweapon pathogen be used, containing the spread of disease will present significant ethical, political, cultural, operational, and legal challenges. [ 6 ] In Dark Winter, some members advised the imposition of geographic quarantines around affected areas, but the implications of these measures (e.g., interruption of the normal flow of medicines, food and energy supplies, and other critical needs) were not clearly understood at first. [ 7 ] In the end, it is not clear whether such draconian measures would have led to a more effective interruption of disease spread. [ 7 ] What's more, allocation of scarce resources necessitated some degree of rationing, [ 7 ] creating conflict and significant debate between participants representing competing interests.
https://en.wikipedia.org/wiki/Operation_Dark_Winter
Operation Denver [ 3 ] [ 4 ] [ 5 ] (sometimes referred to as "Operation INFEKTION") was an active measure disinformation campaign run by the KGB in the 1980s to plant the idea that the United States had invented HIV/AIDS [ 6 ] [ 7 ] as part of a biological weapons research project at Fort Detrick , Maryland . Historian Thomas Boghardt popularized the codename "INFEKTION" based on the claims of former East German Ministry for State Security (Stasi) officer Günter Bohnsack , who claimed that the Stasi codename for the campaign was either "INFEKTION" or perhaps also "VORWÄRTS II" ("FORWARD II"). [ 6 ] However, historians Christopher Nehring and Douglas Selvage found in the former Stasi and Bulgarian State Security archives materials that prove the actual Stasi codename for the AIDS disinformation campaign was Operation Denver. [ 8 ] [ 9 ] The operation involved "an extraordinary amount of effort – funding radio programs, courting journalists, distributing would-be scientific studies", and even became the subject of a report by Dan Rather on the CBS Evening News . [ 10 ] The Soviet Union used the campaign to undermine the United States' credibility, foster anti-Americanism , isolate America abroad, and create tensions between host countries and the U.S. over the presence of American military bases , which were often portrayed as the cause of AIDS outbreaks in local populations. [ 11 ] The groundwork appeared in the pro-Soviet Indian newspaper Patriot which, according to a KGB defector named Ilya Dzerkvelov, the KGB had set up in 1962 for the sheer purpose of publishing disinformation . [ 11 ] An anonymous letter was sent to the editor in July 1983 from a "well-known American scientist and anthropologist" who claimed that AIDS was manufactured at Fort Detrick by genetic engineers. 
The "scientist" claimed that "that deadly mysterious disease was believed to be the results of the Pentagon 's experiments to develop new and dangerous biological weapons", and implicated Centers for Disease Control and Prevention (CDC) scientists sent to Africa and Latin America to find dangerous viruses alien to Asia and Europe . These results were purportedly analyzed in Atlanta and Fort Detrick, and this was presented as the "most likely course of events" leading to the development of AIDS. The letter claimed that the Pentagon was continuing such experiments in neighboring Pakistan and that, as a result, the AIDS virus was threatening to spread to India . The title of the article, "AIDS may invade India", suggested that the immediate goal of the KGB's disinformation was to exacerbate tensions between the U.S., India, and Pakistan. [ 11 ] [ 12 ] Two years later, the KGB apparently decided to make use of its earlier disinformation to launch an international campaign to discredit the U.S. They wrote in a telegram to their allied secret service in Bulgaria, the Bulgarian Committee for State Security (KDS), on September 7, 1985: We are conducting a series of [active] measures in connection with the appearance in recent years in the USA of a new and dangerous disease, "Acquired Immune Deficiency Syndrome – AIDS"…, and its subsequent, large-scale spread to other countries, including those in Western Europe. The goal of these measures is to create a favorable opinion for us abroad that this disease is the result of secret experiments with a new type of biological weapon by the secret services of the USA and the Pentagon that spun out of control. [ 8 ] [ 13 ] The telegram, which referred indirectly back to the Patriot article ("facts ... 
in the press of the developing countries, in particular India"), provided guidance to Bulgarian State Security regarding how to couch their AIDS disinformation: Facts have already been cited in the press of the developing countries, in particular India, that testify to the involvement of the special services of the United States and the Pentagon in the appearance and rapid spread of the AIDS disease in the United States, as well as other countries. Judging by these reports, along with the interest shown by the U.S. military in the symptoms of AIDS and the rate and geography of its spread, the most likely assumption is that this most dangerous disease is the result of yet another Pentagon experiment with a new type of biological weapon. This is confirmed by the fact that the disease affected initially only certain groups of people: homosexuals, drug addicts, immigrants from Latin America. [ 13 ] A month later, the Soviet newspaper Literaturnaya Gazeta , also a known outlet for KGB disinformation, [ 14 ] published an article by Valentin Zapevalov entitled "Panic in the West, or what is hiding behind the sensation surrounding AIDS". It cited the (dis)information contained in the Patriot article, [ 15 ] but also gave further details regarding the alleged development of the AIDS virus. Employees of the CDC had allegedly assisted the Pentagon by traveling to Zaire , Nigeria and Latin America to collect samples of the "most pathogenic viruses" that could not be found in Europe or Asia. These samples were then combined to develop the human immunodeficiency virus (HIV) that causes AIDS. The disinformation campaign insisted the Pentagon then carried out isolated experiments in Haiti and within the U.S. itself on marginalized groups in U.S. society: drug addicts, homosexuals, and the homeless. [ 16 ] Zapevalov's article was subsequently reprinted in Kuwait, Bahrain, Finland, Sweden, Peru, and other countries.
[ 17 ] It followed very closely the guidelines that the KGB had already sent to its Bulgarian "comrades" a month before. [ 13 ] Determining the exact role of the Stasi in the AIDS disinformation campaign has been difficult, given that around 90% of the records of its foreign intelligence division, the Main Directorate for Reconnaissance (HVA), were destroyed [ 18 ] or disappeared [ 19 ] in 1989–90. Based on materials in the Bulgarian secret police archives, the card files of the HVA, and documents from or relating to the HVA scattered among the records of other divisions of the Stasi, it has been possible to reconstruct some aspects of the Stasi's involvement in the disinformation campaign. At the beginning of September 1986, the tenth division of the HVA (HVA/X), responsible for organizing and coordinating the HVA's campaigns of active measures, wrote the following in a draft plan for cooperation with Bulgarian State Security: Operation "DENVER". With the goal of exposing the dangers to mankind arising from the research, production, and use of biological weapons, and also in order to strengthen anti-American sentiments in the world and to spark domestic political controversies in the USA, the GDR [ German Democratic Republic ] side will deliver a scientific study and other materials that prove that AIDS originated in the USA, not in Africa, and that AIDS is a product of the USA’s bioweapons research. [ 8 ] [ 20 ] The KGB confirmed on various occasions that the East German HVA was playing a central role, including in a telegram to the Bulgarians in 1987: The AIDS issue: A complex of [active] measures regarding this issue has been carried out since 1985 in cooperation with the [East] German and to some extent the Czech colleagues. In the initial stage, the task was resolved of spreading in the mass media the version regarding the artificial origin of the AIDS virus and the Pentagon’s involvement in it by means of the military-biological laboratory at Fort Detrick.
As a result of our joint efforts, it was possible to widely disseminate this version. [ 8 ] [ 21 ] As noted above, the Stasi's HVA/X had written that it would send its Bulgarian "comrades" a "scientific study" allegedly "proving" that "AIDS is a product of the USA's bioweapons research". [ 8 ] [ 20 ] From the context of the discussions between officers of the HVA/X and their Bulgarian counterparts in mid-September 1986, it was clear which study was meant: "AIDS: Its Nature and Origin" by Soviet-East German biologist Jakob Segal and his wife, Lilli Segal. The study had been distributed at the summit meeting of the Non-Aligned Movement, held in Harare, Zimbabwe, in August–September 1986, in a brochure entitled "AIDS: USA home-made evil, NOT out of AFRICA". [ 8 ] The report was quoted heavily by Soviet propagandists, and the Segals were often said to be French researchers to hide their connections to communism. Although both Segals, given the double danger to them as Jews and members of the Communist Party of Germany , had fled into exile in France in 1933, both had attained Soviet citizenship in 1940 on the basis of Jakob's birth in then Soviet-annexed Lithuania, and in 1953, they had returned to Germany—specifically, to communist East Berlin . [ 22 ] In his report, Segal postulated that the AIDS virus was synthesized by combining parts of two distantly related retroviruses: VISNA and HTLV-1 . [ 11 ] An excerpt of the Segal Report reads as follows: It is very easy using genetic technologies to unite two parts of completely independent viruses… but who would be interested in doing this? The military, of course… In 1977 a special top security lab… was set up…at the Pentagon's central biological laboratory. One year after that… the first cases of AIDS occurred in the US, in New York City . How it occurred precisely at this moment and how the virus managed to get out of the secret, hush-hush laboratory is quite easy to understand.
Everyone knows that prisoners are used for military experiments in the U.S. They are promised their freedom if they come out of the experiment alive. [ 11 ] Elsewhere in the report, Segal said that his hypothesis was based purely on assumptions, extrapolations, and hearsay and not at all on direct scientific evidence. [ 11 ] The exact relationship of both Segals to the KGB, Stasi, or both at this time—to the extent that it existed—remains unclear. Both publicly denied any involvement of the KGB or Stasi in their work. In talks with Bulgarian State Security in September 1986, the Deputy Director of HVA/X, Wolfgang Mutz, hinted that the HVA had played a role in the publication—or actually, the photocopying—and distribution of the Harare brochure. [ 8 ] [ 23 ] He also suggested that the "operational division" of the HVA with which HVA/X had been cooperating in the disinformation campaign had somehow "attracted" Segal to his research. [ 24 ] This "operational division" was in fact an office in the Sector for Science and Technology ( Sektor Wissenschaft und Technik , SWT) of the HVA, responsible for intelligence-gathering on AIDS and genetic engineering (HVA/SWT/XIII/5). This office had registered a "security dossier" ( Sicherungsvorgang , SVG) "Wind" on September 6, 1985, regarding the protection of East German scientists in the areas of AIDS research, genetic engineering and biotechnology from outside "attacks" in the form of espionage or manipulation by foreign agents. This office in HVA/SWT apparently registered both Segals in this dossier as "contact persons" under the codename "Diagnosis"; whenever other divisions of the Stasi inquired about the Segals, they were directed to this office. HVA/SWT—or "the security", as Jakob Segal called them—gave him at least one piece of advice regarding his study before its printing and distribution. Whether Segal listened to this advice remains unclear.
Still, given their official designation as "contact persons", they need not have known, at least officially, that they were dealing with the Stasi, although Jakob Segal likely knew or could have guessed, given his past dealings with both the Stasi and the KGB. It is quite possible that HVA/SWT was already coordinating with the KGB regarding Segal's research—even without his knowledge—in the second half of 1985, at the time that "Wind" was registered. [ 25 ] Nevertheless, none of the Stasi officers involved with "Wind" or Operation "DENVER" ever claimed that the HVA had played a role in drafting Segal's study. It was clearly his own work, in cooperation with his wife Lilli, although he knew and expected that it would be used for "propaganda". [ 26 ] Whatever exact relationship the Segals may or may not have had to the Soviet or East German security services, the KGB praised Segal's work in its 1987 telegram to Bulgarian State Security. His articles and brochures, the KGB wrote, had attained "great renown". This was especially the case in African countries, where governments and researchers were rejecting as racist the assertions by U.S. researchers that AIDS had originated naturally in Africa, where it had spread from monkeys to humans. [ 21 ] The KGB wrote to the Bulgarians: We are currently resolving the task of bringing the [active] measures down to a more practical level, and in particular, to attain specific political results by exploiting the "laboratory version" for AM [active measures] on other issues. So, efforts are being made to intensify anti-base sentiments in countries where American forces are deployed by using slogans suggesting that U.S. soldiers are the most dangerous carriers of the virus. By demonstrating the defeat of the "African version" [of AIDS' origins], we can whip up anti-American sentiments throughout the states of the continent.
[ 21 ] The AIDS story exploded across the world, and was repeated by Soviet newspapers, magazines, wire services, radio broadcasts, and television. It appeared 40 times in Soviet media in 1987 alone. It received coverage in over 80 countries in more than 30 languages, [ 11 ] primarily in leftist and communist media publications, and was found in countries as far apart as Bolivia, Grenada, Pakistan, New Zealand, Nigeria, and Malta. A few versions made their way into the non-communist press in Indonesia and the Philippines. [ 11 ] Dissemination was usually along a recognized pattern: propaganda and disinformation would first appear in a country outside of the USSR and only then be picked up by a Soviet news agency, which attributed it to others' investigative journalism . That the story came from a foreign source (not widely known to be Soviet controlled or influenced) added credibility to the allegations, especially in impoverished and less educated countries, which generally could not afford access to Western news satellite feeds. To aid in media placement, Soviet propaganda was provided free of charge, and many stories came with cash benefits. [ 11 ] This was particularly the case in India and Ghana, where the Soviet Union maintained a large propaganda and disinformation apparatus for covert media placement. [ 11 ] To explain how AIDS outbreaks in Africa occurred simultaneously with those in the United States, the Moscow World Service announced a discovery by Soviet correspondent Aleksandr Zhukov, who claimed that in the early 1970s, a Pentagon-controlled West German lab in Zaire "succeeded in modifying the non-lethal Green Monkey virus into the deadly AIDS virus". Radio Moscow also claimed that instead of testing a cholera vaccine , American scientists were actually infecting unwitting Zairians, thus spreading AIDS throughout the continent.
These scientists were unaware of the long period before symptom onset, and resumed experimentation on convicts upon returning to the U.S., where the disease then spread when the prisoners escaped. [ 11 ] Claims that the Central Intelligence Agency (CIA) had sent "AIDS-oiled condoms" to other countries sprang up independently in the African press, well after the disinformation operation started. [ 6 ] In 1987, a book ( Once Again About the CIA ) was published by the Novosti Press Agency , with the quote: The CIA Directorate of Science and Technology is continuously modernizing its inventory of pathogenic preparations, bacteria and viruses and studying their effect on man in various parts of the world. To this end, the CIA uses American medical centers in foreign countries. A case in point was the Pakistani Medical Research Center in Lahore … set up in 1962 allegedly for combating malaria . The resulting public backlash eventually closed down the legitimate medical research centre. Soviet allegations declared that the purpose of these research projects, including that of AIDS, was to "enlarge the war arsenal". [ 11 ] Ironically, many Soviet scientists were soliciting help from American researchers to address the Soviet Union's burgeoning AIDS problem, while stressing the virus' natural origins. The U.S. refused to help as long as the disinformation campaign continued. [ 11 ] The Segal Report and the plenitude of press articles were dismissed by both Western and Soviet virologists as nonsense. [ 11 ] Dr. Meinrad Koch, a West Berlin AIDS expert, stated in 1987 that the Segal Report was "utter nonsense" and called it an "evil pseudo-scientific political concoction". Other scientists also pointed out flaws and inaccuracies in the Segal Report, including Dr. Viktor Zhdanov of the D. I. Ivanovsky Institute of Virology [ ru ] in Moscow , who was the top Soviet AIDS expert at the time.
The president of the USSR Academy of Medical Sciences clearly stated that he believed the virus to be of natural origin. Other scientists and doctors from Paris , East and West Berlin, India, and Belgium called the AIDS rumors lies, scientifically unfounded, and otherwise impossible to seriously consider. [ 11 ] Although Segal himself never said "this is fact" and was very careful to maintain this line throughout his report, "such technical qualifiers do not diminish the impact of the charges, however, because when they are replayed, such qualifiers are typically either omitted or overlooked by readers or listeners". [ 11 ] U.S. Embassy officials wrote dozens of letters to various newspaper editors and journalists, and held meetings and press conferences to clarify matters. Many of their efforts resulted in newspapers printing retractions and apologies. [ 11 ] Rebuttals appeared in reports to Congress and from the State Department saying that it was impossible at the time to build a virus as complex as AIDS; medical research had only gotten so far as to clone simple viruses. Antibodies were found decades earlier than the reported research started, and the main academic source used for the story (Segal Report) contained inaccuracies about even such basic things as American geography—Segal said that outbreaks appeared in New York City because it was the closest big city to Fort Detrick. Philadelphia , Baltimore , and Washington, D.C. are all closer, while New York is 200 miles (320 km) away. [ 11 ] The Gorbachev administration also responded indignantly and launched a defensive denial campaign "aimed at limiting the damage done to its credibility by U.S. efforts to raise world consciousness concerning the scope of Soviet disinformation activities". [ 11 ] The Soviet Union interfered with general attempts by U.S. 
Embassy officials to address misconceptions and expose the Soviet disinformation campaign, including placing pressure on news agencies that recanted their position. For example, Literaturnaya Gazeta on December 3, 1986, castigated a Brazilian newspaper which earlier in the year had run a retraction following its publication of the AIDS disinformation story. In 1987, Moscow's Novosti news agency disseminated a report datelined Brazzaville (Congo), calling on the West to put an end to the "anti-African campaign", and reiterating "the charges that the virus was created in U.S. military laboratories", while in 1986 Literaturnaya Gazeta warned specifically against contact with Americans. [ 11 ] In 1988, Sovetskaya Rossiya put out an article defending its right to report different views. The chief of Novosti stated that it drew upon foreign sources for much of the AIDS coverage, and that the press was free under glasnost . [ 11 ] The Mitrokhin Archive reveals that: Faced with American protests and the denunciation of the story by the international scientific community, however, Gorbachev and his advisers were clearly concerned that exposure of Soviet disinformation might damage the new Soviet image in the West. In August 1987 US officials were told in Moscow that the Aids story was officially disowned. Soviet press coverage of the story came to an almost complete halt. [ 28 ] The campaign faded from most Soviet media outlets, but it occasionally resurfaced abroad in Third World countries as late as 1988, usually via press placement agents. [ 11 ] In 1992, 15% of Americans considered it definitely or probably true that "the AIDS virus was created deliberately in a government laboratory".
[ 6 ] In 2005, a study by the RAND Corporation and Oregon State University revealed that nearly 50% of African Americans thought AIDS was man-made, over 25% believed AIDS was a product of a government laboratory, 12% believed it was created and spread by the CIA, and 15% believed that AIDS was a form of genocide against black people . [ 6 ] Other AIDS conspiracy theories have abounded, and have been discredited by the mainstream scientific community. In popular culture, the Kanye West song " Heard 'Em Say " tells listeners, "I know that the government administer AIDS," and the R.E.M. song "Revolution" says in its lyrics: "The virus was invented." [ 29 ] [ 30 ] In South Africa, former president Thabo Mbeki cited the operation's Fort Detrick theory in denying the science of HIV. [ 10 ] [ 7 ] In 1992, the Director of Russia's Foreign Intelligence Service (SVR), Yevgeny Primakov, admitted that the KGB was behind the newspaper articles claiming that AIDS was created by the U.S. government. [ 2 ] Segal's role was exposed by KGB defector Vasili Mitrokhin in the Mitrokhin Archive. Jack Koehler 's 1999 book, Stasi: The Untold Story of the East German Secret Police , describes how the Stasi cooperated with the KGB to spread the story. [ 31 ] [ page needed ] Insofar as the distrust in medical authorities created by the operation led to a distrust in the treatment for AIDS recommended by medical science ( "numerous studies ... have shown that those who disbelieve the science on the origins of H.I.V. are less likely to engage in safe sex or to regularly take recommended medication if infected"), [ 10 ] the operation may have cost many lives. Yaffa argued that the delay in "widespread implementation of antiretroviral therapies in South Africa" may have cost "as many" as 330,000 lives. [ 10 ] [ 7 ]
https://en.wikipedia.org/wiki/Operation_Denver
Operation Groundhog was reportedly a joint US/Kazakh/Russian program to secure radioactive residues of Soviet-era nuclear bomb tests. In 2003, reports appeared in Science Magazine [ 1 ] that the program included paving some areas with thick layers of reinforced concrete over plutonium contaminating the ground, [ 2 ] in order to prevent terrorists from acquiring contaminated material for making a dirty bomb . [ 3 ]
https://en.wikipedia.org/wiki/Operation_Groundhog
Operation LAC ( Large Area Coverage ) was a United States Army Chemical Corps operation which dispersed microscopic zinc cadmium sulfide (ZnCdS) particles over much of the United States and Canada in order to test dispersal patterns and the geographic range of chemical or biological weapons. [ 1 ] Several tests conducted prior to the first spraying under Operation LAC proved the concept of large-area coverage. Canadian files relating to participation in the tests cite in particular three previous series of tests leading up to those conducted in Operation LAC. [ 2 ] In addition, the army admitted to spraying in Minnesota locations from 1953 into the mid-1960s. [ 3 ] In St. Louis in the mid-1950s, and again a decade later, the army sprayed zinc cadmium sulfide via motorized blowers atop Pruitt-Igoe , at schools, from the backs of station wagons, and via planes. [ 4 ] Operation LAC was undertaken in 1957 and 1958 by the U.S. Army Chemical Corps. [ 5 ] The operation involved spraying large areas with zinc cadmium sulfide. [ 3 ] The U.S. Air Force loaned the Army a C-119 , "Flying Boxcar", and it was used to disperse the materials by the ton in the atmosphere over the United States. [ 6 ] The first test occurred on December 2, 1957, along a path from South Dakota to International Falls, Minnesota . [ 7 ] The tests were designed to determine the dispersion and geographic range of biological or chemical agents. [ 6 ] Stations on the ground tracked the fluorescent zinc cadmium sulfide particles. [ 6 ] During the first test and subsequently, much of the material dispersed ended up being carried by winds into Canada. [ 7 ] However, as was the case in the first test, particles were detected up to 1,200 miles away from their drop point. [ 6 ] [ 7 ] A typical flight line covering 400 miles would release 5,000 pounds of zinc cadmium sulfide, and in fiscal year 1958 around 100 hours were spent in flight for LAC.
[ 7 ] That flight time included four runs of various lengths, one of which was 1,400 miles. [ 7 ] The December 2, 1957, test was incomplete due to a mass of cold air coming south from Canada. [ 7 ] It carried the particles from their drop point and then took a turn northeast, taking most of the particles into Canada with it. Military operators considered the test a partial success because some of the particles were detected 1,200 miles away, at a station in New York state. [ 7 ] A February 1958 test at Dugway Proving Ground ended similarly, when another Canadian air mass swept through and carried the particles into the Gulf of Mexico. [ 7 ] Two other tests, one along a path from Toledo, Ohio , to Abilene, Texas , and another from Detroit to Springfield, Illinois , to Goodland, Kansas , showed that agents dispersed through this aerial method could achieve widespread coverage when particles were detected on both sides of the flight paths. [ 7 ] According to Leonard A. Cole , an Army Chemical Corps document titled "Summary of Major Events and Problems" described the scope of Operation LAC. [ 8 ] Cole stated that the document described the tests as the largest ever undertaken by the Chemical Corps, with a test area stretching from the Rocky Mountains to the Atlantic Ocean and from Canada to the Gulf of Mexico. [ 7 ] Other sources describe the scope of LAC varyingly; examples include "Midwestern United States", [ 6 ] and "the states east of the Rockies". [ 3 ] Specific locations are mentioned as well. Some of those include: a path from South Dakota to Minneapolis, Minnesota, [ 5 ] Dugway Proving Ground , Corpus Christi, Texas , north-central Texas, and the San Francisco Bay area . [ 3 ] Bacillus globigii was used to simulate biological warfare agents (such as anthrax ), because it was then considered a contaminant with little health consequence to humans; however, BG is now considered a human pathogen.
[ 9 ] Anecdotal evidence [ 3 ] exists of ZnCdS causing adverse health effects as a result of LAC. However, a U.S. government study, conducted by the U.S. National Research Council , stated, in part, "After an exhaustive, independent review requested by Congress, we have found no evidence that exposure to zinc cadmium sulfide at these levels could cause people to become sick." [ 10 ] Still, the use of ZnCdS remains controversial, and one critic accused the Army of "literally using the country as an experimental laboratory". [ 11 ] According to the National Library of Medicine's TOXNET database, the EPA reported that cadmium sulfide was classified as a probable human carcinogen. [ 12 ]
https://en.wikipedia.org/wiki/Operation_LAC
Operation Masher , also known as Operation White Wing (24 January – 6 March 1966), was the largest search and destroy mission that had been carried out in the Vietnam War up until that time. [ 6 ] It was a combined mission of the United States Army , Army of the Republic of Vietnam (ARVN), and Republic of Korea Army (ROK) in Bình Định Province on the central coast of South Vietnam. The People's Army of Vietnam (PAVN) 3rd Division , made up of two regiments of North Vietnamese regulars and one regiment of main force Viet Cong (VC) guerrillas, controlled much of the land and many of the people of Bình Định Province, which had a total population of about 800,000. [ 7 ] : 201 A CIA report in 1965 said that Binh Dinh was "just about lost" to the communists. [ 8 ] The name "Operation Masher" was changed to "Operation White Wing" because President Lyndon Johnson wanted a name that sounded more benign. Adjacent to the operational area of Masher/White Wing, in Quang Ngai province, the U.S. and South Vietnamese Marine Corps carried out a complementary mission called Operation Double Eagle . [ 9 ] The 1st Cavalry Division (Airmobile) was the principal U.S. ground force involved in Operation Masher, and the operation was marked as a success by its commanders. They claimed that the PAVN 3rd Division had been dealt a hard blow, but intelligence reports indicated that a week after the withdrawal of the 1st Cavalry, PAVN soldiers were returning to take control of the area where Operation Masher had taken place.
[ 9 ] [ 7 ] : 214–5 Most of the PAVN/VC had slipped away prior to or during the operation, [ 10 ] and the discrepancy between weapons recovered and the body count led to criticisms of the operation. [ 11 ] Allegations during the Fulbright Hearings that there had been six civilian casualties for every reported PAVN/VC casualty prompted growing criticism of US conduct of the war and contributed to greater public dissension at home. [ 5 ] : 266–8 During Operation Masher, the ROK Capital Division were alleged to have committed the Bình An/Tây Vinh massacre between 12 February and 17 March 1966, in which over 1,000 civilians were allegedly killed. [ 12 ] [ 4 ] The operation would create almost 125,000 homeless people in this province, and the PAVN/VC forces would reappear just months after the US had conducted the operation. [ 10 ] Bình Định Province was a traditional communist and VC stronghold. Binh Dinh consisted of a narrow, heavily cultivated coastal plain with river valleys separated by ridges and low mountains reaching into the interior. The main effort of the campaign in Binh Dinh would come on the Bồng Sơn Plain and in the mountains and valleys that bordered it. The plain, a narrow strip of land starting just north of the town of Bồng Sơn , ran northward along the coast into I Corps . Rarely more than 25 km wide, it consisted of a series of small deltas, which often backed into gently rolling terraces some 30–90 m in height, and, at irregular intervals, of a number of mountainous spurs from the highlands. These spurs created narrow river valleys with steep ridges that frequently provided hideouts for PAVN/VC units or housed PAVN/VC command, control and logistical centers. The plain itself was bisected by the east-west Lai Giang River, which was in turn fed by two others, the An Lao , flowing from the northwest and the Kim Son, flowing from the southwest. These two rivers formed isolated but fertile valleys west of the coastal plain.
The climate in the region was governed by the northeast monsoon. The heaviest rains had usually ended by December, but a light steady drizzle, which the French had called crachin weather, and occasional torrential downpours could be expected to occur through March. These weather systems would at times limit the availability of air support. [ 7 ] : 202–3 Highway 1, a vital artery, ran north and south through Binh Dinh. The area of Operation Masher was about 30 miles (48 km) north to south and reached a maximum of 30 miles (48 km) inland from the South China Sea . The U.S. Marines' Operation Double Eagle extended northward from Masher, and the ROK's Operation Flying Tiger extended southward. South Vietnamese forces participated in all three operations. [ 7 ] : 205 [ 9 ] The 1st Cavalry Division (Airmobile) was selected by U.S. Commander William Westmoreland to carry out the operation. The 1st Cavalry had borne the brunt of the combat during the Siege of Plei Me and the Battle of Ia Drang in October and November 1965, and some battalions of the 1st Cavalry had sustained heavy casualties. More than 5,000 soldiers in the division were recent arrivals in Vietnam with little combat experience. The South Vietnamese 22nd Division stationed in Binh Dinh had also suffered heavy casualties in recent fighting and was on the defensive. [ 7 ] : 201–2 The opposition to the American and South Vietnamese units participating in Operation Masher/White Wing was the PAVN 3rd Division, consisting of approximately 6,000 soldiers in two regiments of PAVN regulars who had recently infiltrated into South Vietnam via the Ho Chi Minh Trail and one regiment of VC guerrillas who had been fighting the South Vietnamese government since 1962. The majority of the population of Binh Dinh was believed to be supportive of the VC. [ 7 ] : 201–2 The plan of Operation Masher was for the U.S., South Vietnamese and ROK soldiers to sweep north and for the U.S.
and South Vietnamese marines to sweep south, catching and killing the PAVN/VC forces between the allied forces. Orders for the U.S. forces in Operation Masher were to "locate and destroy VC/NVA units; enhance the security of GVN [Government of South Vietnam] installations in [provincial capital] Bồng Sơn, and to lay the groundwork for restoration of GVN control of the population and rich coastal plain area." The primary metric for judging the success of the operation would be the body count of PAVN/VC soldiers killed. [ 13 ] The 1st Cavalry Division broke the campaign into two parts. During the first, primarily a preparation and deception operation, a brigade-size task force would establish a temporary command and forward supply base at Phu Cat on Highway 1 south of the area of operations, secure the highway somewhat northward, and start patrolling around Phu Cat to convey the impression that the true target area was well away from the plain. During the second, division elements would move to Bồng Sơn itself and launch a series of airmobile hammer-and-anvil operations around the plain and the adjacent valleys to flush the PAVN/VC toward strong blocking positions. General Harry Kinnard assigned the mission to Colonel Hal Moore 's 3rd Brigade , but if need be, he was ready to add a second brigade to the operation to intensify the pressure and pursuit. [ 7 ] : 203 On the morning of 25 January the men of the 3rd Brigade at Camp Radcliff began their move to staging areas in eastern Binh Dinh. Two battalions, Lieutenant Colonel Raymond L. Kampe's 1st Battalion, 7th Cavalry Regiment and Lt. Col. Rutland D. Beard's 1st Battalion, 12th Cavalry Regiment , went by road and air to Phu Cat, joined the South Koreans in securing the airfield and support base, and carried out wide-ranging search and destroy actions nearby that met only light resistance. Meanwhile, Lt. Col.
Robert McDade 's 2nd Battalion, 7th Cavalry, with about 80 percent of its authorized strength and thus still not fully reconstituted after the fight at LZ Albany , boarded a dozen C-123s at the airstrip for the short ride into Bồng Sơn. One of the C-123s crashed into mountains near An Khe, killing all four crewmen and 42 passengers on board. The rest of the battalion deployed without incident and then helicoptered north to Landing Zone Dog , where engineers started building an airstrip and digging in artillery. [ 7 ] : 203–4 [ 2 ] : 13 On paper, the hammer-and-anvil attack plan was not complicated. After 3rd Brigade elements secured mountain positions west of the Bồng Sơn and set up Firebases Brass and Steel, covering the northern and southern parts of the search area, 2/7th Cavalry would push north from LZ Dog and 2/12th Cavalry, also staging from LZ Dog, would work its way south from the opposite end of the target zone. Meanwhile, with the South Vietnamese Airborne Brigade acting as an eastern blocking force along Highway 1, 1/7th Cavalry would air-assault onto the high ground to the west and push east towards 2/7th Cavalry and 2/12th Cavalry. If PAVN/VC units were in the area, the 3rd Brigade would bring them to battle or destroy them as they fled. [ 7 ] : 204 Operation Masher began officially on the morning of 28 January 1966. Low clouds, wind and heavy rain prevented the movement of artillery to Firebase Brass. Lacking supporting fire, Moore cancelled the 2/12th Cavalry's mission. In the meantime, PAVN/VC fire downed a CH-47 helicopter at Landing Zone Papa north of Bồng Sơn, and Kampe responded by sending a 1/7th Cavalry company to secure the crash site. When it too came under fire, he set aside his original mission, the attack east from the mountains, and moved his two other companies to LZ Papa. By the time they arrived, however, the PAVN/VC had withdrawn. Kampe's units spent the night at the landing zone.
McDade went ahead with the mission, directing his men to begin scouring the hamlets that started about 2 km north of LZ Dog and extended 4 km further up the plain. Company A, 2/7th Cavalry, understrength at two rifle platoons because of the crash three days earlier, entered the area at Landing Zone 2 and pushed north through rice paddies. Company B flew to Firebase Steel to secure it for an artillery battery. [ 7 ] : 204 Company C deployed by helicopter to the northern edge of the target in order to sweep to the southwest. The sandy plain where it set down, Landing Zone 4 ( 14°31′48″N 109°01′26″E  /  14.53°N 109.024°E  / 14.53; 109.024 ), seemed safe, a relatively open tract in the hamlet of Phung Du 2 with a graveyard in its midst and tall palm trees on three sides. Company C omitted the artillery preparation that normally preceded a landing due to the proximity of the village. The first helicopter lift landed at LZ 4 at 08:25, with no PAVN/VC reaction. When the second lift came ten minutes later, however, the PAVN 7th Battalion, 22nd Regiment, entrenched in earthworks, palm groves and bamboo thickets throughout the hamlet, poured mortar and machine gun fire into the landing zone. Company C's commander, Captain Fesmire, waved the second flight away, expecting the troops to be dropped at an alternative landing zone a few hundred meters to the southwest. Instead, they ended up at four nearby but scattered locations. Returning ten minutes later with a third lift, the helicopters unloaded the men at a fifth site. By 08:45 Company C was on the ground, but the unit was so fragmented and enemy fire so intense that the various parts found maneuver difficult and effective communication with one another impossible. Meanwhile, heavy rain impeded the provision of adequate air support, and the men were so dispersed that artillery was of little use. American casualties soon littered the hamlet ground. 
[ 7 ] : 204 McDade ordered Company A to reinforce Company C, but when they reached the southern edge of the landing zone, they also came under fire. Although the men formed a perimeter near a paddy dike, they were soon pinned down and never reached Company C. Early in the afternoon McDade joined Company A, but to no effect. Finally, six helicopters carrying reinforcements from Company B reached LZ 4. But the effort generated so much PAVN fire that all six were hit and two were driven off. Only the command group and part of one platoon were able to land, and they quickly found themselves in a cross fire. Under heavy rain McDade managed to locate the fragmented Company C and succeeded in bringing in artillery support. Meanwhile, the darkness and poor weather gave Fesmire the cover he needed to pull Company C together. As he prepared to settle in for the night, he received orders from McDade to move south, closer to the rest of the battalion. Under heavy fire, he completed the linkup at 04:30 ( 14°31′23″N 109°01′26″E  /  14.523°N 109.024°E  / 14.523; 109.024 ). Along with 20 wounded, his men carried with them the bodies of eight killed. [ 7 ] : 207–8 After dawn on 29 January the low overcast lifted, and fighter-bombers pounded the area to McDade's north, detonating PAVN ammunition and causing large fires. Soon after, McDade's companies, reinforced by 2/12th Cavalry, swept north to eliminate the last PAVN from the hamlet. But the clearing operation took another day, and was completed only when elements of 1/7th Cavalry joined the sweep out of the landing zone. [ 2 ] : 14 [ 7 ] : 208 From then on combat tapered off and Kinnard ordered an end to that phase of the operation, effective at 12:00 on 4 February. The 3rd Brigade had cleared elements of the 22nd Regiment from the coastal plain, claiming 566 PAVN/VC killed. US losses were 123 dead (including the 42 troops and four crew killed in the C-123 crash); two helicopters were shot down and 29 damaged. 
[ 7 ] : 208 On 28 January three Project DELTA U.S. Special Forces teams consisting of 17 personnel were inserted in the An Lao Valley for reconnaissance. The teams ran into immediate trouble, and by the time they were rescued a day later seven men had been killed and three wounded. Project DELTA Commander Major Charles Beckwith was seriously wounded while extracting the teams. The 1st Cavalry was unable to provide support due to the fight at LZ 4. Beckwith was criticized for going into the An Lao Valley, under VC control for 15 years, without South Vietnamese counterparts and ground intelligence and in poor weather. [ 14 ] The An Lao Valley and the surrounding highlands were the next target of the 1st Cavalry. Kinnard believed that the headquarters of the PAVN 3rd Division was located there. [ 7 ] : 208 Bad weather delayed the beginning of the operation until 6 February. The U.S. Marines blocked the northern entrance of the valley, the ARVN blocked the southern entrance, and three battalions were landed in the valley; however, the PAVN/VC forces had withdrawn. The 1st Cavalry discovered large caches of rice and defensive works, but reported killing only 11 PAVN/VC soldiers at a loss to American forces of 49 wounded. [ 7 ] : 209 The U.S. offered to help the inhabitants of the An Lao Valley leave and escape PAVN/VC rule, and 4,500 of its 8,000 occupants did so. The U.S. reported that 3,000 people were moved by U.S. helicopter, the others leaving the valley on foot. [ 7 ] : 209 The Kim Son Valley consisted of seven small river valleys about 15 miles (24 km) southwest of Bồng Sơn. Three American battalions were deployed to the valley. On 11 February the 1st Cavalry established ambush positions in the highlands at the exits to each of the valleys and on 12 February began a sweep up the valley and outward, hoping to catch the PAVN/VC as they retreated. 
The sweep was initially unsuccessful, but over the next few days the number of enemy dead slowly mounted as the result of over a dozen clashes with the Americans. On the morning of 15 February a platoon from Company B, 2/7th Cavalry, came under small-arms and mortar fire while patrolling about 4 km southeast of Firebase Bird , near the valley center. Captain Diduryk, the company commander, initially estimated that the opposing force was no larger than a reinforced platoon, but it soon became apparent that he had bumped into at least two companies occupying a 300 m-long position running along a jungled streambank and up a hillside. Intelligence later identified the force as part of the VC 93rd Battalion, 2nd Regiment. Fire from Company B's mortar platoon, from helicopter gunships and Skyraiders and from artillery at Firebase Bird pounded the VC, and then Diduryk's men attacked. One platoon fixed bayonets and charged the dug-in defenders across the stream. A second pushed north to block an escape route, and a third stayed in reserve. Unnerved by the frontal assault, the VC retreated in disorder. Many stumbled into the open and were quickly killed. Those who survived fled to the north, where they came within range of the waiting platoon. A smaller group attempted to escape southward but came under fire from the reserve platoon, which took many prisoners, including 93rd Battalion commander Lt. Col. Dong Doan, who inadvertently provided his interrogators with enough information to identify the locations of both his regiment and its headquarters. During the fight Company B killed 59 VC and possibly another 90 for the loss of two killed. [ 7 ] : 209–10 On 16 February Kinnard decided to replace Colonel Moore's brigade with Col. Elvy B. Roberts' 1st Brigade . The next day, the 1st and 2nd Battalions, 7th Cavalry, returned to Camp Radcliff, while 1/12th Cavalry remained behind to join 1st Battalion, 8th Cavalry Regiment and 2/8th Cavalry. 
Together, the three battalions combed the area around Firebase Bird, but the PAVN/VC remained in hiding. Frustrated, on 22 February Roberts changed the direction of the hunt, dispatching 1/12th Cavalry to search Go Chai Mountain, 14 km east of Bird and 7 km west of Highway 1. During the afternoon of 23 February 1/12th Cavalry met a PAVN force estimated at company strength, probably from the 7th Battalion, 12th Regiment. They maintained contact until dark, but then the PAVN escaped. Operations in the area continued until the 27th, but when nothing more of substance occurred, Kinnard decided to abandon the Kim Son Valley. That evening he attached two battalions from 1st Brigade to 2nd Brigade and returned the 1st's command group and 1/12th Cavalry to Camp Radcliff. In all, the 1st Brigade had accounted for up to 160 PAVN/VC killed while losing 29 of its own men. [ 7 ] : 210–1 While the 1st and 3rd Brigades were patrolling the Kim Son Valley between 11 and 27 February, Colonel William R. Lynch's 2nd Brigade closed down operations north of the Lai Giang and transferred its command post to Landing Zone Pony just east of the valley. The move was triggered by Colonel Doan's revelation that the 2nd Regiment was operating in the mountains southeast of Pony, information that seemed to be confirmed when radio intercepts indicated the presence of a major PAVN/VC headquarters there. On 16 February Lynch began a block and sweep of the suspected terrain. Lt. Col. Meyer's 2nd Battalion, 5th Cavalry Regiment , set up three blocking positions: Recoil, roughly 6 km east of the Kim Son Valley; Joe, 4 km southwest of Recoil; and Mike, just over 2 km north of Recoil. The sweep force, 1/5th Cavalry, plus a battery of the 1st Battalion, 77th Artillery Regiment , helicoptered to Landing Zone Coil approximately 6 km northeast of Recoil. 2/12th Cavalry remained near Pony as a reserve. At 06:30, on 17 February, the battery at Coil began pounding the area between Coil and Recoil. 
As the barrage lifted, two companies of 1/5th Cavalry moved off towards the three blocking positions. One of the companies moved out to establish a fourth blocking position east of Recoil, but before the men had gone more than a kilometer they were engulfed by fire from upslope. After calling in air strikes and artillery, Meyer directed one of his rifle companies to reinforce, but on its way it became so heavily engaged that it could not advance. Meyer then committed his third rifle company, and Colonel Lynch ordered 2/12th Cavalry to send a company as well. In the end, the cumulative weight of the American ground attack and the artillery and air strikes drove the VC from the heights, killing at least 127 VC and capturing or destroying three mortars, five recoilless rifles and a quantity of ammunition, leading Lynch to conclude that he had crushed the 2nd Regiment's heavy weapons battalion. [ 7 ] : 211–2 During the early afternoon of 18 February two platoons from Lt. Col. Ackerson's 1/5th Cavalry came under heavy fire while patrolling. With the platoons pinned down, Ackerson reinforced with two rifle companies, but fire from earthworks cut them apart, and casualties were left where they fell. At the end of the day the Americans broke contact to retrieve their dead and wounded. The troops labeled the sector where the roughest fighting had taken place the "Iron Triangle", because of its shape (not to be confused with the better-known Iron Triangle near Saigon ). The fighting continued on the 19th. Company B, 2/12th Cavalry joined Company C, 2/5th Cavalry on a sweep southwest of the Iron Triangle. When one of the companies drew fire in the morning, the other attempted to turn the enemy's flank but ran into more VC. After breaking contact and calling in artillery and air strikes, the two companies attacked, killing 36 VC and forcing the remainder to withdraw. 
1/5th Cavalry, meanwhile, renewed its assault into the triangle, with two companies moving west while the third blocked. But the VC stood their ground, stalling the advance. At dark, the 1/5th Cavalry broke contact to remove their wounded. The next day, 20 February, Lynch ordered Ackerson to continue his attack. Following a morning artillery strike, one of the companies came under fire from a strongpoint no more than 100 m from the scene of the previous day's fighting. The Americans pulled back and called in artillery. In the afternoon a 2/12th Cavalry unit fought a running battle that left 23 VC dead before the VC withdrew. [ 7 ] : 212 On 21 February, attacks and counterthrusts were carried out by both sides. 2/5th Cavalry and 2/12th Cavalry patrolled around their landing zones, while a platoon from 1/5th Cavalry probed the site of the previous day's combat. Once again, intense VC fire forced the Americans to withdraw. Then, having arranged for air support, Lynch pulled all of his units out of the Iron Triangle. B-52s struck the site at midmorning and again in the afternoon. A tactical air mission then dropped 300 tear gas grenades into the area. As evening approached, two companies of 1/5th Cavalry advanced toward the triangle but stopped before entering it when darkness fell. Artillery fired over 700 rounds into the redoubt and an AC-47 gunship dropped illumination flares throughout the night. During the action a psychological operations team circled overhead in a loudspeaker plane, broadcasting the message that further resistance would be futile and dropping safe conduct passes. On 22 February, 1/5th Cavalry moved in to find bunkers, foxholes, and trenches, but no live enemy. Although 41 bodies remained at the site, blood trails, bloody bandages and discarded weapons indicated that many more had been killed or wounded. Colonel Lynch insisted that the operation would have been even more successful if the two B-52 strikes had been timed more closely together. 
Instead, the delay between the first and the second bombing runs had prevented mopping up operations that might have kept more of the VC from escaping. [ 7 ] : 213 During the fight in the Iron Triangle American ground and air forces had killed at least 313 VC and possibly 400 more. The Americans also estimated that the VC had suffered some 900 wounded. Following the operation, one report observed, the entire valley floor reeked with the smell of VC dead. In addition to decimating the heavy weapons battalion of the 2nd Regiment, Colonel Lynch believed that his units had inflicted heavy losses on the Regiment's headquarters and its 93rd and 95th Battalions. The cost to the 2nd Brigade was 23 killed and 106 wounded. Colonel Lynch's brigade rested for a few days before resuming operations on 25 February. Over the next three days his men exchanged fire with small groups of PAVN/VC but failed to generate significant contacts. [ 7 ] : 212 Early in the morning of 28 February a patrol from Company B, 1/5th Cavalry came under sniper fire less than 2 km south of Pony. Unable to locate the sniper position, the patrol members continued their advance. Entering the hamlet of Tan Thanh 2, they met a hail of fire and suffered four wounded. As they pushed deeper into the settlement, automatic weapons opened up on them. They responded with grenades and small arms but soon came under attack on the right flank by 15-20 VC, who killed eight of them within minutes and wounded several more. As the Americans scrambled for cover, the VC emerged from hiding to strip the U.S. dead of their weapons. A relief force arrived a short while later but by then the VC were gone. [ 7 ] : 213–4 Based on prisoner interrogations, American intelligence believed that the PAVN 6th Battalion, 12th Regiment was operating in the Cay Giep Mountains 5 miles (8.0 km) east of Bồng Sơn. General Kinnard wanted to encircle and annihilate it. 
The ARVN 22nd Division surrounded the target area, deploying along the Lai Giang to the north, Highway 1 to the west, and the Tra O Marsh in the south, while the division's junk fleet patrolled the coast to prevent escape by sea. Colonel Lynch's 2nd Brigade would conduct the attack. At 07:30 on 1 March an intense hour-long air, land and sea bombardment of the intended landing zones began. When the firing stopped, the designated sweep force, consisting of 2/5th Cavalry, 1/8th Cavalry and 2/8th Cavalry, came in over the mountains. However, the assault forces found that the bombardment had hardly dented the thick foliage, and the helicopters were unable to land. Eventually, additional air strikes opened holes in the jungle canopy wide enough to allow the men to reach the ground by scrambling down rope ladders suspended from the hovering helicopters. Once deployed, the three battalions, soon joined by 1/5th Cavalry, searched the area and found little, although an ARVN unit near the Tra O Marsh killed about 50 PAVN who were attempting to flee the dragnet. On 4 March, following word from South Vietnamese civilians that most of the PAVN had left the area around the end of February, Kinnard decided that the operation had run its course and over the next two days returned the 2nd Brigade to Camp Radcliff. [ 7 ] : 214 Operation Double Eagle, carried out by U.S. and South Vietnamese marines, was a complementary mission to Operation Masher in neighboring Quảng Ngãi Province , adjoining Binh Dinh province to the north. Operation Double Eagle covered an area of about 500 square miles (1,300 km 2 ), about 25 miles (40 km) north to south and extending as much as 20 miles (32 km) inland from the South China Sea. Some 6,000 regular troops and 600 guerrillas were believed to be operating within this area. U.S. Marines dedicated to the operation would number more than 5,000, plus several thousand South Vietnamese soldiers of the ARVN 2nd Division . 
[ 15 ] Operation Double Eagle began on 28 January with the largest amphibious assault of the Vietnam War, and the largest since the Korean War. [ 16 ] Bad weather hampered the early days of the operation, but the Marines pushed slowly inland. The plan was for the Marines to push southward into Binh Dinh province, where they would meet the 1st Cavalry advancing northward in Operation Masher, trapping PAVN/VC forces between them. In reality, the Marines found few PAVN/VC soldiers in their operating area, the main force PAVN regiments having withdrawn from the area a few days prior to the amphibious landing. The Marines claimed to have killed 312 PAVN/VC soldiers and captured 19 at a loss of 24 Marines killed. [ 15 ] : 23–4 Marine Corps Commandant General Victor Krulak later said that Operation Double Eagle had failed because the PAVN and VC had been forewarned. He added that the operation showed the people of the region that the Marines "would come in, comb the area and disappear; whereupon the VC would resurface and resume control." [ 15 ] : 35–6 Operation Masher was carried out in heavily populated rural areas. The fighting resulted in the displacement, voluntary or involuntary, of a large number of people. [ 8 ] : 180 The 1st Cavalry listed as a success of the operation that "140,000 Vietnamese civilians volunteered to leave their hamlets in the An Lao and Son Long valleys to return to GVN control." [ 11 ] The "voluntary" nature of the departure or flight of many of the civilians from their land is questionable. Operation Masher demonstrated that a consequence of large unit military operations and heavy utilization of artillery and aerial bombardment was the generation of refugees from the fighting and, inevitably, civilian casualties. The U.S. evacuated thousands of civilians by helicopter from combat areas and thousands more walked out to safety in the larger towns near the coast. 
The 1st Cavalry counted more than 27,000 people displaced by the operation. While many people fled the fighting, others remained for fear that if they abandoned their homes, the VC would confiscate their land and redistribute it to more dedicated supporters. [ 8 ] : 203–5 Although the U.S. Army maintained that the refugees were fleeing communism, an Army study in mid-1966 concluded that U.S. and South Vietnamese bombing and artillery fire, in conjunction with ground operations, were the immediate and prime causes of refugee movement into South Vietnamese government controlled cities and coastal areas. The U.S. considered that meeting the humanitarian needs of refugees was the responsibility of South Vietnam, but the response of the South Vietnamese government was often deficient. [ 17 ] An American journalist visited a camp housing 6,000 refugees from Operation Masher a week after their displacement. He found them packed 30 to a room, receiving inadequate food and medical treatment for diseases and wounds, and in a sullen and depressed mood. [ 8 ] : 204–5 Operation Masher-White Wing was considered a success by the Americans, demonstrating the capability of the helicopter-borne 1st Cavalry to conduct a sustained campaign against PAVN and VC forces and "to find, fix, and finish" the enemy. The U.S., as it had in the earlier Battle of Ia Drang, relied on the massive use of firepower. 171 B-52 strikes hit suspected PAVN/VC positions and 132,000 artillery rounds were expended, about 100 for each PAVN/VC soldier killed. In addition, tactical air support was provided by 600 sorties by fixed-wing aircraft. [ 17 ] : 222 [ 8 ] : 202–3 A total of 228 1st Cavalry soldiers were killed and another 46 died in an airplane crash; 834 were wounded. 24 U.S. Marines were killed and 156 wounded in Operation Double Eagle, and several additional Americans from other units were killed. Eleven ROK soldiers were reported killed; South Vietnamese casualties are not known. The U.S. claimed to have killed 1,342 PAVN/VC. 
The ARVN and ROK forces reported they had killed an additional 808 PAVN/VC. It was further claimed that 300-600 PAVN/VC were taken prisoner and 500 defected, and that an additional 1,746 were estimated killed. 52 crew-served weapons and 202 individual weapons were captured or recovered. [ 7 ] : 214–5 The PAVN claimed victory, stating that the 3rd Division had eliminated more than 2,000 enemy troops (killed, wounded or captured). [ 3 ] :chapter 4 An unknown number of those killed were civilians, and under the standard operating rules at the time anyone who did not 'voluntarily' leave a free-fire zone was generally regarded as VC. [ 10 ] The total number of civilians killed is unknown, but one estimate put civilian casualties at six for every VC killed. The US called these allegations exaggerated and blamed the VC for many deaths because of tactics which endangered civilians, such as recruiting civilians and firing from populated areas. [ 5 ] These issues were raised in the Fulbright Hearings. [ 5 ] ROK troops of the Capital Division were alleged to have killed over 1,000 civilians in the Bình An/Tây Vinh massacre . [ 12 ] [ 4 ] Although this was the biggest search-and-destroy operation of the war up to that point, most of the PAVN/VC forces slipped away and reappeared in the region a few months later. [ 10 ] An estimated 125,000 people within Binh Dinh province lost their homes as a result of Operation Masher/White Wing. [ 10 ] The positive results cited by the Americans appear to have been only transitory. The 1st Cavalry cited among the favorable consequences of Operation Masher that it had given the local population "a chance to be freed from VC domination by moving to areas which are under government control" and stated that the South Vietnamese government "intends to reestablish civil government in the area." PAVN/VC influence, however, continued to be extensive in Binh Dinh province. 
Two months later, in Operation Crazy Horse , the 1st Cavalry was back sweeping part of the same area covered by Operation Masher, and in October 1966 Operation Thayer began an extended effort by the 1st Cavalry once again to "fully pacify" Binh Dinh province. [ 11 ] A Joint Chiefs of Staff memo reported by The Wall Street Journal in 1966 urged President Johnson to "expand" the use of non-lethal chemicals in South Vietnam. Journalist Pierre Darcourt alleged in the news magazine L'Express that 3-Quinuclidinyl benzilate , or Agent BZ, was used during a 1st Cavalry Division offensive in March 1966 as part of Operation White Wing. [ 18 ] [ 19 ]
https://en.wikipedia.org/wiki/Operation_Masher
Operation Pacer HO was a 1977 operation of the U.S. Air Force that incinerated the Agent Orange stored at Johnston Atoll aboard the Dutch-owned ship M/T Vulcanus . "HO" was an abbreviation of Herbicide Orange. [ 1 ] Operation Pacer IVY (InVentorY) was an associated United States Department of Defense mission to inventory, collect, consolidate, re-drum, remove from the Southeast Asian theater, and store Agent Orange. Disposal of the Herbicide Orange under Operation Pacer HO was to begin in the fall of 1974, but because of various delays by the United States Environmental Protection Agency (EPA) and Air Force budget limitations, disposal was postponed until the fall of 1976. Work was then completed on the drum crusher and work area at Johnston Atoll for the transfer of HO from 55 U.S. gal (210 L; 46 imp gal) drums to an R-5 refueler truck, and later for transfer to the incinerator ship. The redrumming activity began on September 30, 1974. As a part of the effort to dispose of the HO stored at Johnston Atoll and Gulfport, Mississippi , an attempt was made to filter out the 2,3,7,8-Tetrachlorodibenzodioxin (TCDD), using filters of coconut charcoal, so that the Agents could be re-used or re-sold. The twelve cylindrical filters used at Gulfport, Mississippi, contained approximately 13 g (0.46 oz) of the contaminant TCDD. They were transferred to Johnston Atoll on December 8, 1976, and were stored in Bunker 785 while awaiting final disposition. While the TCDD was successfully removed, the resultant filters created a disposal problem beyond the technology of the time. [ 2 ] On April 26, 1977, the EPA issued a research permit to burn the 15,000 55-gallon drums (825,000 U.S. gal (3,120,000 L; 687,000 imp gal)) of HO from Gulfport, Mississippi, during July 1977. 
Modification of the redrumming facility, installation of needed utilities and communications, and requisitioning/positioning of logistics support (i.e., R-5 refuelers, forklifts, personnel protective equipment) were accomplished in May and June in preparation for the re-drumming operation. [ 2 ] From May to June 1977, Air Force personnel from the five Combat Logistics Support Squadrons (CLSS) on Temporary Duty at the U.S. Naval Construction Battalion (Seabee) Base at Gulfport, Mississippi, transferred 800,000 U.S. gal (3,000,000 L; 670,000 imp gal) of Herbicide Orange from the stored drums to rail tank cars, which were subsequently transferred to the Vulcanus at the dock. The Vulcanus , with its crew of 18 foreign nationals and the load of HO from Gulfport, Mississippi, arrived at Johnston Atoll on July 10, 1977. The monitoring equipment that had been airlifted to Johnston Atoll from the TRW Corporation at Redondo Beach, California, was immediately installed. Food, fresh water, and 30,985 U.S. gal (117,290 L; 25,800 imp gal) of diesel fuel were loaded from Johnston Atoll stocks. The Vulcanus sailed for the burn site (15°45'-17°45' N latitude, 171°30'-173°30' W longitude) with seven monitors and one EPA representative as passengers. Incineration began at 0030Z ( Zulu time ) July 15, 1977. [ 2 ] A special airlift mission was flown on July 21, 1977, in support of the operation. It flew from Hickam AFB to Johnston Atoll with a special seat configuration for 80 passengers and brought 61 new employees to perform the de-drumming. Additionally, 29 personnel who were already on Johnston Atoll under contract were used for the de-drumming phase of the operation. The Vulcanus finished incinerating the Gulfport, Mississippi, HO on July 24, 1977, and docked at Johnston Atoll at 0130Z July 26, 1977. A second special airlift mission departed Johnston Atoll 1615Z July 26, 1977, with the exhaust samples taken from the first burn. 
Its destination was Wright Patterson AFB , Ohio, where Wright State University analyzed the samples to determine the efficiency of the destruction of the TCDD in the HO. [ 2 ] In the interim, and following two days of debriefings, EPA representatives granted permission on July 27, 1977, to proceed with the de-drumming of the HO stored at Johnston Atoll. This authorization specified that only half of the capacity of the Vulcanus could be loaded without a formal go-ahead from the EPA, because if the data from the first burn did not meet EPA specifications, the second half of the ship would have to be loaded with diesel fuel and a burn of 50 percent HO and 50 percent diesel would have to be conducted. During the first burn, the incinerator had been extinguished by an unknown liquid, at which time a cloud of exhaust engulfed the ship. To ensure no harm had occurred to the crew or monitors, complete physicals were given to 26 people at the Johnston Atoll dispensary while the ship was being loaded for the second burn. [ 2 ] Based on the analysis of the exhaust samples from the first burn, a permit was issued on August 4, 1977, authorizing incineration of the remaining HO at Johnston Atoll. Loading of the second half of the HO on the Vulcanus was completed and it sailed at 1830Z August 6, 1977, with the second burn beginning at 0900Z August 7, 1977. A total of 30,875 U.S. gal (116,870 L; 25,709 imp gal) of diesel was loaded for this trip. When the second burn was completed, the Vulcanus returned to Johnston Atoll at 1830Z August 17, 1977. [ 2 ] The loading of the final drums of herbicide was completed at 1920Z August 23, 1977; a total of 24,795 drums had been loaded by that time. The Vulcanus sailed for the third burn with final incineration beginning at 1800Z August 24, 1977. A total of 24,170 U.S. gal (91,500 L; 20,130 imp gal) of diesel fuel was provided by Johnston Atoll. 
The third burn was completed at 2150Z September 3, 1977, and the Vulcanus returned to Johnston Atoll the next day. [ 2 ] The Vulcanus sailed out one more time from September 6–8, 1977, to burn the diesel fuel which had been used to rinse any residual HO from its holding tanks and to discharge the sea water which had also been used to rinse the tanks. A total of 11,716 U.S. gal (44,350 L; 9,756 imp gal) of diesel was provided for this voyage. The cleanup of the storage area and disposal of the dunnage on which the drums had been stored was completed on September 12, 1977. [ 2 ] This article incorporates public domain material from websites or documents of the United States government .
https://en.wikipedia.org/wiki/Operation_Pacer_HO
Operation Ranch Hand was a U.S. military operation during the Vietnam War , lasting from 1962 until 1971. Largely inspired by the British use of chemicals 2,4,5-T and 2,4-D ( Agent Orange ) during the Malayan Emergency in the 1950s, it was part of the overall herbicidal warfare program during the war called "Operation Trail Dust". Ranch Hand involved spraying an estimated 19 million U.S. gallons (72,000 m 3 ) of defoliants and herbicides [ 1 ] over rural areas of South Vietnam in an attempt to deprive the Viet Cong of food and vegetation cover. Areas of Laos and Cambodia were also sprayed to a lesser extent. According to the Vietnamese government, the chemicals caused 400,000 deaths. [ 2 ] The United States government has described these figures as "unreliable". [ 3 ] Nearly 20,000 sorties were flown between 1961 and 1971. [ citation needed ] The "Ranch Handers" motto was "Only you can prevent a forest" [ 1 ] – a take on the popular U.S. Forest Service poster slogan of Smokey Bear . During the ten years of spraying, over 5 million acres (20,000 km 2 ) of forest and 500,000 acres (2,000 km 2 ) of crops were heavily damaged or destroyed. Around 20% of the forests of South Vietnam were sprayed at least once. [ 4 ] The herbicides were sprayed by the U.S. Air Force flying C-123s using the call sign "Hades". The planes were fitted with specially developed spray tanks with a capacity of 1,000 U.S. gallons (4 m 3 ) of herbicides. A plane sprayed a swath of land that was 80 m (260 ft) wide and 16 km (9.9 mi) long in about 4½ minutes, at a rate of about 3 U.S. gallons per acre (3 m 3 /km 2 ). [ 5 ] Sorties usually consisted of three to five aircraft flying side by side. 95% of the herbicides and defoliants used in the war were sprayed by the U.S. Air Force as part of Operation Ranch Hand. The remaining 5% were sprayed by the U.S. 
Chemical Corps , other military branches, and the Republic of Vietnam using hand sprayers, spray trucks, helicopters and boats, primarily around U.S. military installations. [ 6 ] The herbicides were sprayed at up to 50 times the concentration used in normal agricultural applications. [ citation needed ] The most common herbicide used was Herbicide Orange, more commonly referred to as Agent Orange : a fifty-fifty mixture of the two herbicides 2,4-D (2,4-dichlorophenoxyacetic acid) and 2,4,5-T (2,4,5-trichlorophenoxyacetic acid) manufactured for the U.S. Department of Defense primarily by Monsanto Corporation and Dow Chemical . The other most common color-coded Ranch Hand herbicides were Agent Blue ( cacodylic acid ), which was primarily used against food crops, and Agent White , which was often used when Agent Orange was not available. The agents used, collectively known as the Rainbow Herbicides , together with their active ingredients and years of use, were as follows: [ 7 ] The herbicides were procured by the U.S. military from Dow Chemical Company (all but Agent Blue), Monsanto (Agent Orange, Agent Purple, and Agent Pink), Hercules Inc. (Agent Orange and Agent Purple), Thompson-Hayward Chemical Company (Agent Orange and Agent Pink), Diamond Alkali /Shamrock Company (Agent Orange, Agent Blue, Agent Purple, and Agent Pink), United States Rubber Company (Agent Orange), Thompson Chemicals Corporation (Agent Orange and Agent Pink), Agrisect Company (Agent Orange and Agent Purple), Hoffman-Taff Inc. (Agent Orange), and the Ansul Chemical Company (Agent Blue). [ 8 ] In April 1967, the entire American domestic production of 2,4,5-T was confiscated by the military; foreign sources were also tapped, including Imperial Chemical Industries (ICI). [ 15 ] 65% of the herbicides used contained 2,4,5-trichlorophenoxyacetic acid that was contaminated with 2,3,7,8-tetrachlorodibenzodioxin , [ 6 ] a " known human carcinogen ...
by several different routes of exposure, including oral, dermal, and intraperitoneal ". [ 16 ] About 12,000,000 U.S. gal (45,000,000 L; 10,000,000 imp gal) of dioxin -contaminated herbicides were sprayed over Southeast Asia (mainly in Vietnam, Cambodia, and Laos) during American combat operations in the Vietnam War. [ 17 ] In 2005, a New Zealand government minister was quoted and widely reported as saying that Agent Orange chemicals had been supplied from New Zealand to the United States military during the conflict. Shortly after, the same minister claimed to have been misquoted, although this point was less widely reported. From 1962 to 1987, 2,4,5-T herbicide had been manufactured at an Ivon Watkins-Dow plant in New Plymouth for domestic use; however, it has not been proven that the herbicide was exported for use by the U.S. military in Vietnam. [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] For most of the war, Operation Ranch Hand was based at Bien Hoa Air Base (1966–1970), for operations in the Mekong Delta region where U.S. Navy patrol boats were vulnerable to attack from areas of undergrowth along the water's edge. Storage, mixing, loading, and washing areas and a parking ramp were located just off the base's inside taxiway between the Hot Cargo Ramp and the control tower. For operations along the central coast and the Ho Chi Minh trail regions, Ranch Hand operated out of Da Nang Air Base (1964–1971). Other bases of operation included Phù Cát Air Base (1968–1970), Tan Son Nhut Air Base (1962–1966), Nha Trang Air Base (1968–69), Phan Rang Air Base (1970–1972), and Tuy Hoa Air Base (1971–1972). [ 23 ] Other bases were also used as temporary staging areas for Ranch Hand . The Da Nang, Bien Hoa and Phu Cat air bases are still heavily contaminated with dioxin from the herbicides, and have been placed on a priority list for containment and clean-up by the Vietnamese government.
The first aerial spraying of herbicides was a test run conducted on 10 August 1961 in a village north of Đắk Tô against foliage. [ 24 ] : 11 Testing continued over the next year and, even though there was doubt in the State Department , the Pentagon and the White House as to the efficacy of the herbicides, Operation Ranch Hand began in early 1962. Individual spray runs had to be approved by President John F. Kennedy until November 1962, when Kennedy gave the authority to approve most spray runs to the Military Assistance Command, Vietnam and the U.S. Ambassador to South Vietnam . Ranch Hand was given final approval to spray targets in eastern Laos in December 1965. [ 24 ] : 45–68 The issue of whether or not to allow crop destruction was under great debate due to its potential to violate the Geneva Protocol . [ 25 ] However, American officials pointed out that the British had previously used 2,4,5-T and 2,4-D (virtually identical to America's use in Vietnam) on a large scale throughout the Malayan Emergency in the 1950s in order to destroy bushes, crops, and trees in an effort to deny communist insurgents the cover they needed to ambush passing convoys. [ 26 ] Indeed, Secretary of State Dean Rusk told President Kennedy on 24 November 1961, that "[t]he use of defoliant does not violate any rule of international law concerning the conduct of chemical warfare and is an accepted tactic of war. Precedent has been established by the British during the emergency in Malaya in their use of aircraft for destroying crops by chemical spraying." [ 27 ] The president of South Vietnam, Ngo Dinh Diem , began to push the U.S. Military Advisory Group in Vietnam and the White House to begin crop destruction in September 1961, but it was not until October 1962 that the White House gave approval for limited testing of Agent Blue against crops in an area believed to be controlled by the Viet Cong.
[ citation needed ] Soon after, crop destruction became an integral part of the Ranch Hand program. Targets for the spray runs were carefully selected to satisfy the strategic and psychological operations goals of the U.S. and South Vietnamese military. Spray runs were surveyed to pinpoint the target area and then placed on a priority list. Due to the low altitude (ideally 150 ft (46 m)) required for spraying, the C-123s were escorted by fighter aircraft or helicopter gunships that would strafe or bomb the target area in order to draw out any ground fire if the area was believed to be 'hot'. Spray runs were planned to enable as straight a run as possible to limit the amount of time the planes flew at low altitude. Data on the spray runs, their targets, the herbicide used and amount used, weather conditions and other details were recorded and later put into a database called the Herbicide Reporting System (HERBS) tapes. The effectiveness of the spraying was influenced by many factors, including weather and terrain. Spray runs occurred during the early morning hours before temperatures rose above 85 °F (29 °C) and the winds picked up. Mangroves in the Delta region required only one spraying and did not survive once defoliated, whereas dense forests in the uplands required two or more spray runs. Within two to three weeks of spraying, the leaves would drop from the trees, which would remain bare until the next rainy season. In order to defoliate the lower stories of forest cover, one or more follow-up spray runs were needed. About 10 percent of the trees sprayed died from a single spray run. Repeated spraying resulted in increased mortality for the trees, as did following up the herbicide missions with napalm or bombing strikes. [ 28 ] The use of herbicides in the Vietnam War was controversial from the beginning, particularly for crop destruction.
The scientific community began to protest the use of herbicides in Vietnam as early as 1964, when the Federation of American Scientists objected to the use of defoliants. [ 29 ] The American Association for the Advancement of Science (AAAS) issued a resolution in 1966 calling for a field investigation of the herbicide program in Vietnam. [ 29 ] In 1967, seventeen Nobel laureates and 5,000 other scientists signed a petition asking for the immediate end to the use of herbicides in Vietnam. [ 29 ] In 1970, AAAS sent a team of scientists—the Herbicide Assessment Commission (HAC), consisting of Matthew Meselson, Arthur Westing , John Constable, and Robert Cook—to conduct field tests of the ecological impacts of the herbicide program in Vietnam. [ 29 ] A 1969 study by the Bionetics Research Laboratory found that 2,4,5-T could cause birth defects and stillbirths in mice. The U.S. government suspended the military use of 2,4,5-T in the U.S. in April 1970. [ 29 ] Sporadic crop destruction sorties using Agent Blue and Agent White continued throughout 1970 until the final Ranch Hand run was flown on 7 January 1971. [ 29 ] The use of herbicides as a defoliant had long-term destructive effects on the people of Vietnam and their land and ecology, [ 30 ] [ 31 ] as well as on those who fled in the mass exodus from 1978 to the early 1990s. According to the Vietnamese government, the US program exposed approximately 4.8 million Vietnamese people to Agent Orange, resulting in 400,000 deaths due to a range of cancers and other ailments. [ 2 ] Later corrective studies indicate that previous estimates of Agent Orange exposure were biased by government intervention and underestimation, such that current estimates for dioxin release are almost double those previously predicted. [ 32 ] According to the Vietnamese government, census data indicate that the United States military sprayed herbicides directly over millions of Vietnamese people during its strategic use of Agent Orange.
[ 32 ] According to the Vietnamese government, the program caused health problems for three million Vietnamese, with 150,000 children born with severe birth defects, [ 33 ] and 24% of the area of Vietnam being defoliated. The Red Cross of Vietnam estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. [ 34 ] The United States government has described these figures as "unreliable". [ 3 ] According to the Department of Veterans Affairs, 2.6 million U.S. military personnel were exposed, and hundreds of thousands of veterans are eligible for treatment for Agent Orange-related illnesses. [ 34 ] [ 35 ] [ 36 ]
https://en.wikipedia.org/wiki/Operation_Ranch_Hand
Operation Red Hat was a United States Department of Defense movement of chemical warfare munitions from Okinawa , Japan to Johnston Atoll in the North Pacific Ocean , which occurred in 1971. U.S. chemical weapons were brought into Okinawa in 1962 based on the recommendation of Secretary of Defense Robert McNamara , according to declassified documents. In 1970, U.S. Defense Secretary Melvin Laird met Japan Defense Agency chief Yasuhiro Nakasone , and said that the United States had received information that North Korea had a supply of chemical weapons. [ citation needed ] The move of U.S. chemical weapons to Okinawa in 1962 was meant to serve as a deterrent. The Red Hat code name was assigned by the Assistant Chief of Staff for Intelligence , Department of the Army , on November 12, 1962, during the planning to deploy chemical agents to the 267th Chemical Platoon on Okinawa . [ 1 ] The 267th Chemical Platoon (Service) was activated on Okinawa on December 1, 1962 at Chibana Ammunition Depot. The depot was a hill-top installation next to Kadena Air Base . [ 2 ] During this deployment, "Unit personnel were actively engaged in preparing RED HAT area, site 2 for the receipt and storage of first increment items, [shipment] "YBA", DOD Project 112 ." The company received further shipments, code named YBB and YBF, which according to declassified documents also included sarin, VX , and mustard gas. [ 3 ] By 1969, according to later newspaper reports, there was an estimated 1.9 million kg (1,900 metric tons) of VX stored on Okinawa. [ 2 ] In 1969, over 20 personnel (23 U.S. soldiers and one U.S. civilian , according to other reports) were exposed to low levels of the nerve agent sarin while sandblasting and repainting storage containers. [ 2 ] The resultant publicity appears to have contributed to the decision to move the weapons off Okinawa. 
Chemical agents were stored in the high security Red Hat Storage Area (RHSA) which included hardened igloos in the weapon storage area , the Red Hat building (#850), two Red Hat hazardous waste warehouses (#851 and #852), an open storage area, and security entrances and guard towers. The US government directed relocation of chemical munitions from Okinawa to Johnston Atoll in 1971. An official U.S. film on the mission says that "safety was the primary concern during the operation", though Japanese resentment of U.S. military activities on Okinawa also complicated the situation. At the technical level, time pressures imposed to complete the mission, the heat, and water rationing problems also complicated the planning. [ 1 ] The initial phase of Operation Red Hat involved the movement of chemical munitions from a depot storage site to Tengan Pier, eight miles away, and required 1,332 trailers in 148 convoys. The second phase of the operation moved the munitions to Johnston Atoll. [ 4 ] The Army leased 41 acres (170,000 m 2 ) on Johnston Atoll. Phase I of the operation took place in January and moved 150 tons of distilled mustard (HD), a blister agent chemically identical to mustard agent (H), manufactured by either the Levinstein or Thiodiglycol processes, but purified further so that it can be stored longer before polymerizing. The USNS Lt. James E. Robinson (T-AK-274) arrived at Johnston Atoll with a load of HD projectiles on January 13, 1971. [ citation needed ] Phase II completed cargo discharge to Johnston Atoll with five moves of the remaining 12,500 tons of munitions, in August and September 1971. They arrived in the following order: USNS Sea Lift (T-LSV-9) , USNS Private Francis X. McGraw (T-AK-241) , USNS Miller, USNS Sealift, USNS Pvt McGraw. [ 5 ] The USS Grapple (ARS-7) , under the command of a Captain Pilcher, was part of Operation Red Hat. 
Units operating under United States Army Ryukyu Islands (USARYIS) were the 2nd Logistical Command , the 267th Chemical Company , the 5th and 196th Ordnance Detachments (EOD), and the 175th Ordnance Detachment. Originally, it was planned that the munitions be moved to Umatilla Chemical Depot , but this never happened due to public opposition and political pressure. [ 6 ] The United States Congress passed legislation on January 12, 1971 (PL 91-672) that prohibited the transfer of nerve agent , mustard agent , Agent Orange and other chemical munitions to all 50 U.S. states. [ 7 ] The 1971 weapons transfer voyages transported the chemical agents that became the first stockpile at Johnston Atoll. The chemical weapons brought from Okinawa included nerve and blister agents contained in rockets, artillery shells, bombs, mines, and one-ton (900 kg) containers. In 1985 the U.S. Congress mandated that all chemical weapons stockpiled at Johnston Atoll, mostly mustard agent, sarin, and VX, be destroyed. [ 8 ] Prior to the beginning of destruction operations, Johnston Atoll held about 6.6 percent of the entire U.S. stockpile of chemical weapons . [ 9 ] The Johnston Atoll Chemical Agent Disposal System (JACADS) was built to destroy all the chemical munitions on the island. [ 10 ] The first weapon disposal incineration operation took place on June 30, 1990. Transition from the testing phase began in May 1993, and full-scale operations commenced in August. Twice, in 1993 and 1994, the facility had to be evacuated because of hurricanes ; operations were delayed for as long as 70 days during these periods. [ 11 ] On December 9, 1993, a spill of about 500 pounds (226 kg) of sarin (Agent GB) occurred inside the Munitions Demilitarization Building. No sarin leaked beyond the building and the contingency plan was not activated. JACADS suspended incineration of munitions until investigation of the incident was satisfactorily completed.
[ 11 ] The last munitions were destroyed in 2000.
https://en.wikipedia.org/wiki/Operation_Red_Hat
Operation Sandcastle was a United Kingdom non-combat military operation conducted between 1955 and 1956. Its purpose was to dispose of chemical weapons by dumping them in the sea. [ 1 ] The British possessed almost 71,000 air-dropped 250-kilogram bombs, each filled with tabun . These had been seized from German ammunition dumps during the final months of World War II . A total of 250,000 tons of German chemical weapons had been discovered, the majority of which were destroyed because they comprised warfare agents which the Allies already possessed in great abundance (e.g. mustard gas at sites such as RAF Bowes Moor ). [ 2 ] However, the stocks of tabun and sarin were considered more valuable because the Allies did not possess nerve agent technology at that time. As a result, captured stocks of German nerve agents were divided between Britain and the United States after discussion, with the Americans taking the sarin. The British transferred their 14,000 tons of ordnance containing tabun in October 1945, via Hamburg and Newport , to temporary storage at the RAF strategic reserve ammunition store at Llanberis . Longer-term facilities were prepared at RAF Llandwrog , where the bombs were to be stored in stacks, out in the open, on the runways of the disused airfield. The intention was that any leaks of nerve agent would be dispersed by the prevailing winds. The bombs were transported to Llandwrog by truck from August 1946 to July 1947. In July 1947 it was discovered that the bombs were fuzed and a number of them were leaking nerve agent. The fact that the bombs had fuzes inserted meant that they were inherently unsafe: to reduce the risk of accidental detonation, standard practice is to avoid installing the fuze in any air-dropped bomb until shortly before it is loaded onto an aircraft to be used in combat. For similar reasons bomb fuzes are always stored separately, well away from bombs.
This was not the case with the 250 kilogram tabun bombs at RAF Llandwrog . Not only had the bombs been left with fuzes inserted for a considerable amount of time (possibly years), but they were also left exposed to the elements, creating a corrosion risk, together with the inevitable temperature fluctuations which resulted from changing weather. None of these factors was accepted practice regarding the safe, long-term storage of bomb fuzes or explosive ordnance in general. At a rate of 500 bombs a week they were defuzed and individually coated in a waxy preservative to seal them. Seventy-two irreparable devices were neutralised on-site by being drained into individual pits filled with caustic soda crystals. Despite being given a preservative covering, the bombs continued to deteriorate in the damp Welsh climate, and in 1951 twenty-one Bellman hangars were erected on the site to store them. Finally, in June 1954, it was decided to dispose of the entire stock because by then it was recognised that not only did the weapons have no military value but they had actually become a liability, which could only worsen as time passed. Operation Sandcastle was divided into two sections: a sea voyage to Cairnryan and then a transfer to suitable hulks there for later sinking north-west of Ireland beyond the continental shelf . It was intended to process 16,000 bombs in the first attempt in mid-1955. The work began with the construction of a road between Llandwrog and the nearby port of Fort Belan , where six tank landing craft were assembled. Loading trials in June indicated only 400 bombs could be loaded on each craft, fewer than hoped. It was then decided to remove the tail-fins from the bombs to reduce their length, and to pack them in new boxes. This work increased each craft's load to 800 bombs and by mid-July all 16,000 devices had been safely carried to Cairnryan. The SS Empire Claire was the first scuttling ship.
Its loading began in late June, and by 23 July all 16,000 bombs were aboard, although an ill-considered loading plan had given it a noticeable list to starboard. The three scuttling charges of TNT were positioned to ensure its sinking would be steady and flat, and the nine-man crew embarked. Departure was delayed by industrial action on the Firth of Clyde , which prevented the departure of the ocean-going tugboat Forester . On 25 July 1955 the SS Empire Claire , SS Forester , and navy escorts Mull and Sir Walter Campbell left Cairnryan. The Empire Claire soon broke down and was taken under tow. They reached the scuttling point ( 56°30′00″N 12°00′00″W ) in the early morning of 27 July, but waited until 10:00am for the arrival of an RAF photo-reconnaissance aircraft to observe the operation. The initial two scuttling charges blew and dramatically increased the vessel's starboard list, forcing the use of the emergency charge to open its stern and cause it to sink rapidly, bows up, to a depth of around 2,500 metres (8,200 ft). The later sinkings went without any problems. MV Vogtland was scuttled on 30 May 1956 at the same site, taking 28,737 bombs with it, and on 21 July 1956 the SS Kotka was sunk (at 56°31′00″N 12°05′00″W ) with 26,000 bombs, 330 tons of arsenic compounds , and three tons of toxic seed dressings.
https://en.wikipedia.org/wiki/Operation_Sandcastle
Operation Steel Box , also known as Operation Golden Python (German name for the transport in Germany: Aktion Lindwurm ), was a 1990 joint U.S.–West German operation which moved over 100,000 U.S. chemical weapons from Germany to Johnston Atoll . At a United States Army site near Clausen , West Germany , 100,000 GB- and VX-filled American chemical munitions were stored in 15 concrete bunkers. [ 1 ] These munitions were managed by the 330th Ordnance Company (EOD) and guarded by the 110th Military Police Company, both headquartered in nearby Münchweiler an der Rodalb . The propellants for these munitions were stored at Leimen Site 67. The GB and VX munitions had undergone a refurbishment from 1980 to 1982. The weapons in this depot were scheduled to be moved due to an agreement between the United States and West Germany. The 1986 agreement, between Ronald Reagan and Helmut Kohl , provided for the removal of 155 mm and 8 inch unitary chemical projectiles. [ 2 ] The program sponsor, the Military Sealift Command, brought in the naval architecture firm George G. Sharp, Inc. of New York City as project manager to oversee the design and development efforts to modify and outfit the two crane ships for the mission, and assigned former Electric Boat submarine engineer Jim Ruggieri, P.E., as project engineer.
The vessels were outfitted with a collective protection system – a positive-pressure system used to pressurize the deckhouse relative to the cargo hold as a means of preventing inadvertent weapon gas migration in the event of a containment failure; manned laboratories, to provide a safe and comfortable environment for scientists to perform analyses of the products; unmanned “sniffer” and alarm modules to sample cargo hold air to detect containment failures, as well as to detect and alarm on positive-pressure system failure; power generation modules to supplement ship power and emergency power provisions; and specialized communications modules to permit coordination with security forces. Operation Steel Box began on July 26, 1990, and ended on September 22, 1990, [ 3 ] but the weapons did not reach their final destination until November. [ 1 ] [ 4 ] The move from the storage facility to an intermediate facility at Miesau utilized trucks and trains, civilian contractors, and U.S. and West German military personnel. [ 2 ] The weapons were repacked and shipped by truck from their storage facility until they reached the railway in Miesau. [ 1 ] The truck transport portion of the mission involved 28 road convoys which delivered the munitions the 30 miles from Clausen to Miesau. [ 4 ] The munitions were carried by special ammunition train from Miesau to the port of Nordenham . The train transport was well publicized and escorted by 80 U.S. and West German military and police vehicles. [ 1 ] At the port the munitions were loaded onto two modified ships, the SS Gopher State and the SS Flickertail State , [ 2 ] by the Army's Technical Escort Unit. [ 1 ] The ships were operated by the U.S. Military Sealift Command , [ 2 ] and upon leaving Nordenham they sailed for 46 straight days. [ 1 ] [ 2 ] The ships arrived at Johnston Atoll and on November 18 unloaded the last of their cargo containers. [ 1 ] Security and emergency response were both concerns during Steel Box.
Besides the police and military escort for the trains, the road convoys had restricted airspace overhead. [ 1 ] Along the route, emergency response teams were on stand-by. [ 1 ] While the ships were in port, U.S. Navy EOD detachments conducted underwater hull sweeps to ensure limpet mines had not been attached to the ships. The 46-day trip at sea was non-stop, with refueling taking place along the route. [ 2 ] The ships were also escorted by the U.S. Navy guided missile cruisers USS Bainbridge and USS Truxtun . [ 2 ] The transport ships avoided the Panama Canal for security reasons, [ 1 ] and took the route around Cape Horn , the tip of South America . [ 2 ] There were no reported chemical agent leaks or security breaches during the transport phase of Steel Box. [ 2 ] The 1990 shipments of nerve agents from Germany to the Johnston Atoll Chemical Agent Disposal System facility caused several South Pacific nations to express unease. [ 2 ] At the 1990 South Pacific Forum in Vanuatu , the island nations of the South Pacific indicated that their concern was that the South Pacific would become a toxic waste dumping ground. [ 5 ] Other concerns raised included the security of the shipments, which were refueled at sea and escorted by U.S. guided missile cruisers, while they were en route to Johnston Atoll. [ 2 ] In Australia, Prime Minister Bob Hawke drew criticism from some of these island nations for his support of the chemical weapons destruction at Johnston Atoll. [ 6 ]
https://en.wikipedia.org/wiki/Operation_Steel_Box
Operation Tailwind was a covert incursion by a small unit of United States Army and allied Montagnard forces into southeastern Laos during the Vietnam War , conducted from 11 to 14 September 1970. Its purpose was to create a diversion for a Royal Lao Army offensive and to exert pressure on the occupation forces of the People's Army of Vietnam (PAVN). A company -sized element of US Army Special Forces and Montagnard commandos (a Hatchet Force ) of the Military Assistance Command, Vietnam – Studies and Observations Group (MACV-SOG or SOG) conducted the operation. Nearly 30 years later, CNN and Time magazine jointly developed an investigative report about Operation Tailwind that was both broadcast and published in June 1998. The TV segment was produced by April Oliver, Jack Smith, Pam Hill, and others. It was narrated by Peter Arnett , noted for war reporting, who had received a 1966 Pulitzer Prize for his work from Vietnam and who had worked with CNN for 18 years. Entitled Valley of Death , the report claimed that US air support had used sarin nerve agent against opponents, and that other war crimes had been committed by US forces during Tailwind. In response the Pentagon conducted an investigation, as did CNN; the news organizations ultimately retracted the report and fired the producers responsible. April Oliver and Jack Smith sued CNN, challenging their dismissals, and reached separate settlements [ 2 ] with the network. After being reprimanded by CNN, Arnett resigned from the organization.
Several individuals who were sources for the reports, whose images were shown in the reports, or who were otherwise identified with the reports, brought other legal actions against CNN and Time Warner. A decision by the Ninth Circuit Court of Appeals in one of the cases states that the Tailwind reports did not defame the plaintiff who was a source for the reports. It noted that the plaintiff, in his interviews with CNN, "admitted the truth of each of the three facts he now challenges." [ 3 ] During late 1970 the overall US-supported military effort in the covert war in the Kingdom of Laos was floundering. Operation Gauntlet , a multi-battalion Royal Lao Army offensive intended to protect Paksong and the strategic Bolovens Plateau, was failing. [ 4 ] They appealed to headquarters of Military Assistance Command, Vietnam – Studies and Observations Group (MACV-SOG or SOG) in Saigon requesting aid from the highly classified unit; specifically, they asked for a unit to enter near Chavane and disrupt PAVN defenses. Colonel John Sadler, SOG's commander, agreed to undertake the mission. However, none of his cross-border reconnaissance teams had ever operated so deep in Laos, and the target area was 20 miles (30 km) beyond the unit's authorized area of operations. The mission was launched by three platoons of Command and Control Central's ( Kontum ) Hatchet Company B and two United States Air Force Pathfinder Teams. The 110 Montagnards and 16 Americans, under the command of Captain Eugene McCarley, were heli-lifted from a launch site at Dak To to a landing zone (LZ) in a valley 60 miles (97 km) to the west, near Chavane. The distance to the target was so great that the men were lifted by three United States Marine Corps (USMC) Sikorsky CH-53 Sea Stallion helicopters from HMH-463 , [ 5 ] escorted by 12 USMC and Army Bell AH-1 Cobra gunships. On the morning of the third day, the Americans overran a PAVN bivouac and killed 54 troops. 
They questioned why the Vietnamese had not fled the area, but members of the Hatchet Force discovered a bunker buried beneath 12 feet (3.7 m) of earth. Inside they found a huge cache of PAVN maps and documents. They had overrun the PAVN logistical headquarters that controlled all of Laotian Route 165. The forces quickly filled two footlockers with the intelligence haul and the Hatchet Force began to seek a way out. The PAVN were closing in, but McCarley dropped off elements at three separate (and smaller) landing zones, catching the PAVN unprepared. [ citation needed ] Casualties incurred during the operation amounted to three Montagnards killed in action and 33 wounded, while all 16 Americans were wounded. Two CH-53s were shot down during the operation. [ 5 ] The efforts of SOG medic Sergeant Gary Michael Rose were considered critical to the survival of many of the Hatchet Force. He was recommended for the Medal of Honor for his actions. [ 6 ] He instead received the Distinguished Service Cross . [ 7 ] This was later upgraded to the Medal of Honor, which President Donald Trump presented to him on October 23, 2017. [ 8 ] In 1998 Cable News Network (CNN) launched NewsStand CNN & Time , a collaboration with Time magazine on reporting to be both broadcast and published in print form. On 7 June 1998 a report about Operation Tailwind, entitled Valley of Death , was broadcast as the premiere episode of the new program. The segment analyzed and criticized Operation Tailwind. It alleged that US aircraft, in an unprecedented reversal of policy and breach of international treaties, had used sarin ("GB" in US/NATO nomenclature) against North Vietnamese ground troops who were attacking the landing zones during the extraction of the forces. The Pentagon did not dispute that some chemical agent was used, nor that both North Vietnamese and American soldiers struggled against its effects. 
However, most witnesses, sworn and unsworn, said that only a potent tear gas (most likely a CN / CS mixture) was used. According to reporting, others insisted it was sarin, or a combination of tear gas and sarin. [ 9 ] A second element of the reporting was an allegation that Operation Tailwind had been devised to eliminate a group of Americans who had defected to the enemy and were holed up in a Laotian village. According to the report, the nerve agent had been sprayed from aircraft twice: once to prep the village and once during extraction of troops. The report claimed that more than 100 Laotian men, women, and children had been killed during the attack on the village and that two American defectors were also killed. [ 10 ] The broadcast (and the published Time magazine article of June 15) appeared to be reliably sourced. Admiral Thomas Moorer , chairman of the Joint Chiefs of Staff at the time of Tailwind, appeared to say that nerve agents had been used, and not just during this operation. However, Admiral Moorer later told investigators that he "never confirmed anything" to CNN regarding Operation Tailwind, that he had no knowledge of the use of sarin or the targeting of defectors, and that he believed producer April Oliver had asked him "trick" questions. [ 11 ] Later, however, in sworn deposition testimony taken during the lawsuit brought by one of the producers, Admiral Moorer reviewed April Oliver's notes of her interviews with him, including his responses to her questions. He did not make any significant objections to their accuracy. [ 12 ] Former SOG Lieutenant Robert Van Buskirk (one of the three platoon leaders) and three of the participating SOG sergeants allegedly gave information that supported the allegations as presented in the televised and published investigative report. Van Buskirk said that the Montagnard Hatchet Force was exposed on the landing zone ("LZ") when the teargas agent was deployed to drive the enemy back.
He also said that he saw his men (who were not equipped with gas masks) convulsing when the wind blew the agent back upon the LZ. The CNN/Time reports suggested that war crimes had been committed. The Pentagon launched its own investigation. [ 13 ] Another piece of evidence for the use of sarin came from the fact that at least one American involved in the operation suffered from a degenerative neurological disorder caused by exposure to nerve gas. [ 14 ] CNN and Time magazine undertook an internal investigation. Floyd Abrams , a New York constitutional lawyer, was hired to conduct the investigation for them. They jointly concluded that the journalism of the report was "flawed" and that the report should be publicly retracted, with apologies made to persons and institutions cited in it. The two key CNN producers of the report, April Oliver and Jack Smith, were fired outright when they refused to resign. Senior producer Pam Hill of CNN resigned. Reporter Peter Arnett was reprimanded and soon resigned, going to work for HDNet and then NBC . Abrams later said that he had urged CNN/Time Warner to retract the report, but to acknowledge that it may have had truth to it. He said, retraction "doesn't necessarily mean that the story isn't true. … Who knows? Someday we might find other information. And, you know, maybe someday I'll be back here again, having done another report saying that, 'You know what? It was all true.'" [ 15 ] In early July 1998, Tom Johnson , CNN News Group Chairman, President and CEO, issued a statement about the findings of the internal investigation. He pledged acceptance of the findings and reiterated that the allegations in Valley of Death and related reports "cannot be supported." He said there was insufficient evidence that sarin or any other deadly gas was used, nor could CNN confirm that American deserters were targeted, or whether they were at the camp in Laos.
As a supplement to CNN's retraction, on July 2 and July 5, 1998, the company aired retraction broadcasts that sought to portray some of the sources for the Tailwind reports as unreliable. [ 16 ] Oliver and Smith were chastised but unrepentant. They put together a 77-page document supporting their side of the story; it included testimony from military personnel apparently confirming the use of sarin. [ 17 ] Active and retired military personnel consulted by the media, including CNN's own military analyst, USAF Major General Perry Smith (ret), noted that a particularly strong, non-lethal formulation of "CS" teargas was used during Tailwind. But they said that it should not be confused with sarin, which is categorized as a weapon of mass destruction by the United Nations . [ 18 ] Several individuals who were sources for the reports, whose images were shown in the reports, or who were otherwise identified with the reports, brought other legal actions against CNN and Time Warner. These actions were combined by the Judicial Panel on Multidistrict Litigation and were assigned to the United States District Court for the Northern District of California. They became collectively known as the "Operation Tailwind" litigation. [ 19 ] CNN and Time Warner defended their reports from claims of defamation , and most of these actions were dismissed by the court. [ 20 ] In none of these cases did the court find that the original Tailwind reports had defamed anyone. A decision by the Ninth Circuit Court of Appeals in one of the cases states that the Tailwind reports did not defame the plaintiff who was a source for the reports. It noted that the plaintiff, in his interviews with CNN, "admitted the truth of each of the three facts he now challenges." [ 3 ] The Ninth Circuit said that CNN may have subsequently defamed this source in its retraction broadcast's statement seeking to portray the source as "unreliable".
The court concluded that the question of whether the source was defamed by CNN in that retraction broadcast "merits further development", and the appeals court remanded "this issue to the district court for further proceedings." [ 21 ] The HBO series The Newsroom featured a major storyline in its second season that explored the fictional ACN's coverage of "Operation Genoa". This was loosely based on CNN's coverage of Tailwind. [ 22 ]
https://en.wikipedia.org/wiki/Operation_Tailwind
Operation Top Hat was a "local field exercise" [ 1 ] conducted by the United States Army Chemical Corps in 1953. The exercise involved the use of Chemical Corps personnel to test biological and chemical warfare decontamination methods. These personnel were deliberately exposed to these contaminants, so as to test decontamination. In June 1953 the United States Army formally adopted guidelines regarding the use of human subjects in chemical, biological , or radiological testing and research. [ 1 ] The guidelines were adopted per an Army Chief of Staff memo (MM 385) and closely mirrored the Nuremberg Code . [ 1 ] These guidelines also required that all research projects involving human subjects receive approval from the Secretary of the Army . [ 1 ] The guidelines, however, left a loophole; they did not define what types of experiments and tests required such approval from the secretary, thus encouraging "selective compliance" with the guidelines. [ 1 ] Under the guidelines, seven research projects involving chemical weapons and human subjects were submitted by the Chemical Corps for Secretary of the Army approval in August 1953. [ 1 ] [ 2 ] One project involved vesicants , one involved phosgene , and five were experiments which involved nerve agents ; all seven were approved. [ 1 ] [ 2 ] Operation Top Hat, however, was not among the projects submitted to the Secretary of the Army for approval. [ 2 ] Operation Top Hat was termed a "local field exercise" by the Army and took place from September 15–19, 1953, at the Army Chemical School at Fort McClellan , Alabama. [ 1 ] [ 2 ] In a 1975 Pentagon Inspector General 's report, the military maintained Top Hat was not subject to the guidelines requiring approval because it was a "line of duty" exercise in the Chemical Corps. [ 2 ] The experiments used Chemical Corps personnel to test decontamination methods for biological and chemical weapons, [ 2 ] including mustard gas and nerve agents. 
[ 1 ] Chemical Corps personnel participating in the tests were not volunteers and were not informed of the tests. [ 1 ]
https://en.wikipedia.org/wiki/Operation_Top_Hat
The operation chart is a graphical and symbolic representation of the manufacturing operations used to produce a product. [ 1 ] The operation chart illustrates only the value-adding activities in the manufacturing process ; therefore, material handling and storage are not illustrated in this chart. The operation chart records the overall picture of the process and the sequence-wise steps of the operations. The operations described in the operation chart are:
https://en.wikipedia.org/wiki/Operation_chart
An operational-level agreement ( OLA ) defines interdependent relationships in support of a service-level agreement (SLA). [ 1 ] The agreement describes the responsibilities of each internal support group toward other support groups, including the process and timeframe for delivery of their services. The objective of the OLA is to present a clear, concise and measurable description of the service provider's internal support relationships. OLA is sometimes expanded to other phrases, but they all have the same meaning. An OLA is not a substitute for an SLA. The purpose of the OLA is to help ensure that the underpinning activities performed by several support team components are aligned to provide the intended SLA. If the underpinning OLA is not in place, it is often very difficult for organizations to go back and engineer agreements between support teams to deliver the SLA. The OLA has to be seen as the foundation of good practice and common agreement.
https://en.wikipedia.org/wiki/Operational-level_agreement
Operational View ( OV ) is one of the basic views defined in the enterprise architecture (EA) of the Department of Defense Architecture Framework V1.5 (DoDAF) and is related to the concept of operations . Under DODAF 2, which became operational in 2009, the collections of views are termed 'viewpoints' and no longer views. Other enterprise architecture frameworks also have operational views or their equivalents. For example, the MODAF has an Operational Viewpoint and the NATO Architecture Framework has an Operational View (a collection of subviews). This article further explains the construction of the Operational View of the DoDAF V1.5. The "Operational View" (OV) in the DoDAF Enterprise architecture framework (version 1/1.5) ('Operational Viewpoint' in DODAF 2) describes the tasks and activities, operational elements, and information exchanges required to conduct operations. A pure Operational View is material-independent. However, operations and their relationships may be influenced by new technologies such as collaboration technology, where process improvements are in practice before policy can reflect the new procedures. [ 1 ] The operational viewpoint provides a means to describe what is needed in a solution-free or implementation-free way. There may be some cases, however, in which it is necessary to document the way processes are performed given the restrictions of current systems, in order to examine ways in which new systems could facilitate streamlining the processes. In such cases, an Operational View may have material constraints and requirements that must be addressed. For this reason, it may be necessary to include some high-level Systems View (SV) architecture data as overlays or augmenting information onto the Operational View products. [ 1 ] The Department of Defense Architecture Framework (DoDAF) defines a standard way to organize a systems architecture into complementary and consistent views.
It is especially suited to large systems with complex integration and interoperability challenges, and is apparently unique in its use of "operational views" detailing the external customer's operating domain in which the developing system will operate. The DoDAF defines a set of products that act as mechanisms for visualizing, understanding, and assimilating the broad scope and complexities of an architecture description through graphic, tabular, or textual means. These products are organized under four views; each view depicts certain perspectives of an architecture as described below. Only a subset of the full DoDAF viewset is usually created for each system development. The figure represents the information that links the operational view, systems and services view, and technical standards view. The three views and their interrelationships, driven by common architecture data elements, provide the basis for deriving measures such as interoperability or performance, and for measuring the impact of the values of these metrics on operational mission and task effectiveness. [ 2 ] The Department of Defense Architecture Framework (DoDAF) has defined a series of seven different types of Operational View products, which are described in this section. [ 1 ] In addition to examining behavior over time, one can also assess an overall dynamic mission cost over time in terms of human and system/network resource dollar costs and their process dollar costs. Analysis of dollar costs in executable architectures is a first step in an architecture-based investment strategy, in which architectures eventually need to be aligned to funding decisions to ensure that investment decisions are directly linked to mission objectives and their outcomes. The figure on the right illustrates the anatomy of one such dynamic model.
[ 1 ] State transitions in executable operational architectural models provide for descriptions of conditions that control the behavior of process events in responding to inputs and in producing outputs. A state specifies the response of a process to events. The response may vary depending on the current state and the rule set or conditions. Distribution settings determine process execution times. Examples of distribution strategies include: constant values, event list, constant interval spacing, normal distribution, exponential distribution, and so forth. Priority determines the processing strategy if two inputs reach a process at the same time. Higher priority inputs are usually processed before lower priority inputs. [ 1 ] Processes receiving multiple inputs need to define how to respond. Examples of responses include: process each input in the order of arrival independently of the others, process only when all inputs are available, or process as soon as any input is detected. Processes producing multiple outputs can assign probabilities (totaling 100 percent) under which each output is produced. Histograms are examples of generated timing descriptions. They are graphic representations of processes, human and system resources, and their used capacity over time during a simulation run. These histograms are used to perform dynamic impact analysis of the behavior of the executable architecture. Figure 4-23 is an example showing the results of a simulation run of human resource capacity. [ 1 ]
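The input-handling and probabilistic-output rules described above can be sketched in code. The following Python fragment is an illustrative sketch only (the class and method names are invented for this example and are not part of any DoDAF tool): it shows a process node that fires either as soon as any input arrives or only when all inputs are present, and then selects one of several outputs according to assigned probabilities.

```python
import random

class Process:
    """Toy executable-architecture process node (names are illustrative)."""

    def __init__(self, inputs, mode="all", outputs=None):
        self.inputs = {name: False for name in inputs}  # input-arrival flags
        self.mode = mode              # "all": wait for every input; "any": fire on the first
        self.outputs = outputs or []  # list of (label, probability) pairs, totaling 1.0

    def receive(self, name):
        self.inputs[name] = True
        ready = all(self.inputs.values()) if self.mode == "all" else any(self.inputs.values())
        if ready:
            # reset flags and produce one probabilistically chosen output
            for k in self.inputs:
                self.inputs[k] = False
            return self.fire()
        return None

    def fire(self):
        # choose an output label according to its assigned probability
        r, acc = random.random(), 0.0
        for label, prob in self.outputs:
            acc += prob
            if r < acc:
                return label
        return self.outputs[-1][0]

# A process that needs both inputs before responding, with two weighted outputs
p = Process(["order", "stock"], mode="all", outputs=[("ship", 0.9), ("reject", 0.1)])
print(p.receive("order"))  # None: still waiting for "stock"
print(p.receive("stock"))  # fires and returns "ship" or "reject"
```

With `mode="any"` the same node would fire on the first arriving input, mirroring the alternative response strategy named above.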
https://en.wikipedia.org/wiki/Operational_View
Operational availability in systems engineering is a measurement of how long a system has actually been available for use compared with how long it should have been available. Operational availability is a management concept that evaluates the following. [ 1 ] Any failed item that is not corrected will induce operational failure. {\displaystyle A_{o}} is used to evaluate that risk. Operational failure is unacceptable in any situation where the following can occur. In military acquisition, operational availability is used as one of the Key Performance Parameters in requirements documents, to form the basis for decision support analyses. [ 2 ] Aircraft systems, ship systems, missile systems, and space systems have a large number of failure modes that must be addressed with limited resources. Formal reliability modeling during development is required to prioritize resource allocation before operation begins. Estimated failure rates and logistics delays are used to identify the number of forward-positioned spare parts required to avoid excessive down time. This is also used to justify the expense associated with redundancy. Formal availability measurement is used during operation to prioritize management decisions involving upgrade resource allocation, manpower allocation, and spare parts planning. Operational availability is used to evaluate the following performance characteristic. For a system that is expected to be available constantly, the below operational availability figures translate to the system being unavailable for approximately the following lengths of time (when all outages during a year are added together): The following data is collected for maintenance actions while in operation to prioritize corrective funding. This data is applied to the reliability block diagram to evaluate individual availability reduction contributions using the following formulas.
Redundant items do not contribute to availability reduction unless all of the redundant components fail simultaneously. Operational availability is the overall availability considering each of these contributions.
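The relationship between an availability figure and cumulative annual downtime can be made concrete with a short calculation. The sketch below is plain Python (the function names are chosen for this example): it computes operational availability as uptime over total time, and converts an availability level into the approximate total outage per year for an always-on system.

```python
def operational_availability(uptime_hours, downtime_hours):
    """A_o = uptime / (uptime + downtime)."""
    return uptime_hours / (uptime_hours + downtime_hours)

def annual_downtime_hours(availability, hours_per_year=365 * 24):
    """Total yearly outage implied by an availability level for an always-on system."""
    return (1.0 - availability) * hours_per_year

# A system up 8700 hours and down 60 hours over a year:
print(round(operational_availability(8700, 60), 4))   # 0.9932

# "Three nines" (99.9%) availability allows roughly 8.76 hours of outage per year:
print(round(annual_downtime_hours(0.999), 2))         # 8.76
```

The same conversion explains why each additional "nine" of availability cuts the permissible annual downtime by a factor of ten.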
https://en.wikipedia.org/wiki/Operational_availability
Operational calculus , also known as operational analysis , is a technique by which problems in analysis , in particular differential equations , are transformed into algebraic problems, usually the problem of solving a polynomial equation . The idea of representing the processes of calculus, differentiation and integration, as operators has a long history that goes back to Gottfried Wilhelm Leibniz . The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied. [ 1 ] This approach was further developed by Francois-Joseph Servois , who developed convenient notations. [ 2 ] Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave , George Boole , Bownin, Carmichael, Doukin, Graves, Murphy, William Spottiswoode and Sylvester. Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855 [ 3 ] and by Boole in 1859. [ 4 ] This technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy . At the time, Heaviside's methods were not rigorous, and his work was not further developed by mathematicians. Operational calculus first found applications in electrical engineering problems after 1910, for the calculation of transients in linear circuits, driven by Ernst Julius Berg , John Renshaw Carson and Vannevar Bush . A rigorous mathematical justification of Heaviside's operational methods came only after the work of Bromwich that related operational calculus to Laplace transformation methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition). Other ways of justifying the operational methods of Heaviside were introduced in the mid-1920s using integral equation techniques (as done by Carson) or Fourier transformation (as done by Norbert Wiener ).
A different approach to operational calculus was developed in the 1930s by the Polish mathematician Jan Mikusiński , using algebraic reasoning. Norbert Wiener laid the foundations for operator theory in his 1926 review of the existential status of the operational calculus. [ 6 ] The key element of the operational calculus is to consider differentiation as an operator p = d / d t acting on functions . Linear differential equations can then be recast in the form of "functions" F (p) of the operator p acting on the unknown function equaling the known function. Here F denotes a mapping that takes the operator p and returns another operator F (p) . Solutions are then obtained by making the inverse operator of F act on the known function. The operational calculus generally is typified by two symbols: the operator p and the unit function 1 . In its use the operator is probably more mathematical than physical, the unit function more physical than mathematical. The operator p in the Heaviside calculus initially represents the time differentiator d / d t . Further, this operator is desired to bear the reciprocal relation, such that p −1 denotes the operation of integration. [ 5 ] In electrical circuit theory, one is trying to determine the response of an electrical circuit to an impulse. Due to linearity, it is enough to consider a unit step function H ( t ) , such that H ( t ) = 0 if t < 0 and H ( t ) = 1 if t > 0 . The simplest example of application of the operational calculus is to solve p y = H ( t ) , which gives {\displaystyle y=\operatorname {p} ^{-1}H=\int _{0}^{t}H(u)\,\mathrm {d} u=t\,H(t).} From this example, one sees that {\displaystyle \operatorname {p} ^{-1}} represents integration . Furthermore, n iterated integrations are represented by {\displaystyle \operatorname {p} ^{-n},} so that {\displaystyle \operatorname {p} ^{-n}H(t)={\frac {t^{n}}{n!}}H(t).} Continuing to treat p as if it were a variable, {\displaystyle {\frac {\operatorname {p} }{\operatorname {p} -a}}H(t)={\frac {1}{1-{\frac {a}{\operatorname {p} }}}}\,H(t),} which can be rewritten using a geometric series expansion: {\displaystyle {\frac {1}{1-{\frac {a}{\operatorname {p} }}}}H(t)=\sum _{n=0}^{\infty }a^{n}\operatorname {p} ^{-n}H(t)=\sum _{n=0}^{\infty }{\frac {a^{n}t^{n}}{n!}}H(t)=e^{at}H(t).} Using partial fraction decomposition, one can define any fraction in the operator p and compute its action on H ( t ) . Moreover, if the function 1/ F (p) has a series expansion of the form {\displaystyle {\frac {1}{F(\operatorname {p} )}}=\sum _{n=0}^{\infty }a_{n}\operatorname {p} ^{-n},} it is straightforward to find {\displaystyle {\frac {1}{F(\operatorname {p} )}}H(t)=\sum _{n=0}^{\infty }a_{n}{\frac {t^{n}}{n!}}H(t).} Applying this rule, solving any linear differential equation is reduced to a purely algebraic problem. Heaviside went further and defined fractional powers of p , thus establishing a connection between operational calculus and fractional calculus . Using the Taylor expansion , one can also verify the Lagrange–Boole translation formula {\displaystyle e^{a\operatorname {p} }f(t)=f(t+a),} so the operational calculus is also applicable to finite-difference equations and to electrical engineering problems with delayed signals.
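The series manipulations above can be checked numerically: n iterated integrations of the unit step give t^n/n!, and summing a^n p^{-n} H(t) over n reproduces e^{at} H(t). A minimal sketch in plain Python (the function names are invented for this illustration):

```python
import math

def p_inverse_n(n, t):
    # n iterated integrations of the unit step: p^{-n} H(t) = t^n / n! for t >= 0
    return t ** n / math.factorial(n) if t >= 0 else 0.0

def geometric_series_solution(a, t, terms=60):
    # sum over n of a^n p^{-n} H(t), which the operational calculus equates to e^{at} H(t)
    return sum(a ** n * p_inverse_n(n, t) for n in range(terms))

a, t = 0.7, 2.0
print(abs(geometric_series_solution(a, t) - math.exp(a * t)) < 1e-9)  # True
```

The truncated sum is just the Taylor series of e^{at}, which is exactly the identification the geometric expansion of p/(p − a) makes.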
https://en.wikipedia.org/wiki/Operational_calculus
An operational context ( OLC ) for an operation is the external environment that influences its operation. For a mobile application, the OLC is defined by the combined hardware / firmware / software configurations of several appliances or devices, together with the person carrying these mobile units and the other elements of the working environment that this person, as the key stakeholder, makes use of in temporal, spatial and modal coincidence. This concept differs from the operating context and does not address the operating system of computers. The classic example is the electronic leash configuration, in which one mobile appliance is wirelessly tethered to another. The function of this electronic leash is to sound an audible alarm on either device if one of the two is unintentionally left behind. Several suppliers offer electronic leash solutions. Bluetooth low energy introduced a new aspect, economising the battery life cycle: with special trimming, a button cell can sustain two years of operation. [ 1 ]
https://en.wikipedia.org/wiki/Operational_context
Operational continuity refers to the ability of a system to continue working despite damage, losses, or critical events. In the human resources and organizational domain, including IT , it implies the need to determine the level of resilience of the system and its ability to recover after an event, and to build a system that withstands external and internal events or is able to recover from them without losing its external performance management capability. Organizational continuity is achieved only with specific corporate planning . [ 1 ] In the material domain, it determines the need to adopt redundant systems and performance monitoring systems, and can even imply the practice of cannibalizing, that is, removing serviceable assemblies, sub-assemblies or components from a repairable or serviceable item of equipment to install them on another, in order to keep the external system performance active. [ 2 ] Operational continuity may refer to single systems or single individuals, up to teams or entire complex systems such as IT infrastructures , implying the ability of an organization or system to continue to provide its mission.
https://en.wikipedia.org/wiki/Operational_continuity
The Royal Observer Corps ( ROC ) was a civil defence organisation operating in the United Kingdom between October 1925 and 31 December 1995, when the Corps' civilian volunteers were stood down. (ROC headquarters staff at RAF Bentley Priory stood down on 31 March 1996.) Composed mainly of civilian spare-time volunteers, ROC personnel wore a Royal Air Force (RAF) style uniform and latterly came under the administrative control of RAF Strike Command and the operational control of the Home Office . Civilian volunteers were trained and administered by a small cadre of professional full-time officers under the command of the Commandant Royal Observer Corps , a serving RAF Air Commodore . This sub-article lists and describes the instruments used by the ROC in their nuclear detection and reporting role during the Cold War period. Atomic Weapons Detection Recognition and Estimation of Yield , known as AWDREY , was a desk-mounted automatic instrument, located at controls, that detected nuclear explosions and indicated their estimated size in megatons. Operating by measuring the intense flashes emitted by a nuclear explosion, together with a unit known as DIADEM which measured electromagnetic pulse (EMP), the instruments were tested daily by wholetime ROC officers and regularly reacted to the EMP from lightning strikes during thunderstorms. [ 1 ] AWDREY was designed and built by the Atomic Weapons Establishment at Aldermaston and, mounted on board a ship, was tested for performance and accuracy on real nuclear explosions at the 1957 Kiritimati (or Christmas Island) nuclear bomb test. Reports following a reading on AWDREY were prefixed with the codewords "Tocsin Bang" . The Bomb Power Indicator or BPI consisted of a peak overpressure gauge with a dial that would register when the pressure wave from a nuclear explosion passed over the post. When related to the distance of the explosion from the post, this pressure would indicate the power of the explosion.
Reports following a reading on the BPI were preceded by the codeword "Tocsin" . The Ground Zero Indicator , or GZI or shadowgraph , consisted of four horizontally mounted cardinal compass point pinhole cameras within a metal drum; each 'camera' contained a sheet of photosensitive paper on which were printed horizontal and vertical calibration lines. The flash from a nuclear explosion would produce a mark on one or two of the papers within the drum. The position of the mark enabled the bearing and height of the burst to be estimated. With triangulation between neighbouring posts these readings would give an accurate height and position. The altitude of the explosion was important because a ground or near-ground burst would produce radioactive fallout, whereas an air burst would produce only short-distance and short-lived initial radiation (but no fallout). The Radiac Survey Meter No 2 or RSM was a 1955 meter which used an ionisation chamber to measure gamma radiation; it could measure beta radiation by removing the base-plate and opening the beta shield. This meter suffered from a number of disadvantages: it required three different types of batteries, of which two were obsolete and had to be manufactured to special order, and the circuit included a single electrometer valve or tube. These meters were nevertheless favoured, as they had been tested on fallout in Australia after the Operation Buffalo nuclear tests , and they remained in use until 1982, with a manufacturer commissioned to produce regular special production runs of the obsolete batteries. Within the ROC the RSM was used at post sites for only three years, being superseded in 1958 by the FSM and retained only for post-attack mobile monitoring missions. The Fixed Survey Meter or FSM was introduced in 1958.
For the first time it was possible to operate the unit from within the Monitoring Post or Group HQ, using an external Geiger–Müller probe connected via coaxial cable, mounted on a telescopic rod and protected on the surface by a polycarbonate dome. The FSM used the same obsolete high-voltage batteries as the RSM. In 1985 this instrument was replaced by the PDRM82(F) . The PDRM82(F) was the fixed desktop version of the PDRM82. It gave more accurate readings and used standard 'C' cell torch batteries that lasted many times longer, up to 400 hours of operation. The compact and robust instruments were housed in sturdy orange-coloured polycarbonate cases and had clear liquid crystal displays. The PDRM82(F) could also be operated from within the Monitoring Post or Group HQ as before, using an external Geiger–Müller probe connected via coaxial cable. The telescopic rod, mounting bracket and polycarbonate dome used by the earlier FSM remained in service and continued to be used with the PDRM82(F). The Radiac Survey Meter No 2 or RSM was a 1955 meter which measured gamma and beta radiation. Having been superseded within the ROC by the Fixed Survey Meter, the RSM remained in use only for post-attack mobile monitoring missions. An image can be seen in the 'Static measurement of ionising radiation' section. The Radiac Survey Meter, Lightweight, MkVI , produced by the AVO company (the MkIII and MkIV were also available), was issued to the ROC in the mid-to-late 1960s, but was not regarded favourably due to its use of almost-obsolete Mallory batteries; it was an ionisation-type meter that measured gamma radiation. The PDRM82 or Portable Dose Rate Meter was the standard portable version of the new meters, which were manufactured by Plessey and introduced during the 1980s, giving more accurate readings and using standard 'C' cell torch batteries that lasted many times longer, up to 400 hours of operation.
The compact and robust instruments were housed in sturdy orange-coloured polycarbonate cases and had clear liquid crystal displays. The Radiac sensor was self-contained within the casing. The Dosimeter pocket meters were issued to individual observers for measuring their personal levels of radiation absorption during operations. Three different grades of dosimeter were used, depending on ambient radiation levels. The original hand-wound and temperamental dosimeter charging units (Charging Unit, Individual, Dosimeter No.1 & No.2) were replaced during the 1980s by a battery-operated automatic charging unit (EAL Type N.105A).
https://en.wikipedia.org/wiki/Operational_instruments_of_the_Royal_Observer_Corps
In the evolutionary biology of sexual reproduction , operational sex ratio ( OSR ) is the ratio of sexually competing males that are ready to mate to sexually competing females that are ready to mate, [ 1 ] [ 2 ] [ 3 ] or alternatively the local ratio of fertilizable females to sexually active males at any given time. [ 4 ] This differs from physical sex ratio which simply includes all individuals, including those that are sexually inactive or do not compete for mates. The theory of OSR hypothesizes that the operational sex ratio affects the mating competition of males and females in a population. [ 5 ] This concept is especially useful in the study of sexual selection since it is a measure of how intense sexual competition is in a species, and also in the study of the relationship of sexual selection to sexual dimorphism . [ 6 ] The OSR is closely linked to the "potential rate of reproduction" of the two sexes; [ 1 ] that is, how fast they each could reproduce in ideal circumstances. Usually variation in potential reproductive rates creates bias in the OSR and this in turn will affect the strength of selection. [ 7 ] The OSR is said to be biased toward a particular sex when sexually ready members of that sex are more abundant. For example, a male-biased OSR means that there are more sexually competing males than sexually competing females. The operational sex ratio is affected by the length of time each sex spends in caring for young or in recovering from mating. [ 8 ] For example, if females cease mating activity to care for young, but males do not, then more males would be ready to mate, thus creating a male biased OSR. One aspect of gestation and recovery time would be clutch loss. Clutch loss is when offspring or a group of offspring is lost, due to an accident, predation, etc. This, in turn, affects how long reproductive cycles will be in both males and females. 
If the males were to invest more time in the care of their offspring, they would be spending less time mating. This pushes the population towards a female-biased OSR, and vice versa. Whichever sex invests more care in the offspring, losing that offspring for any reason changes the OSR to be less biased, because the previously occupied sex becomes available to mate again. [ 9 ] As mentioned above, another major factor that influences the OSR is the potential rate of reproduction (PRR). Any sexual differences in the PRR will also change the OSR, so it is important to look at factors that change PRR as well. [ 10 ] [ 11 ] [ 12 ] [ 13 ] These include constraints imposed by environmental factors such as food or nesting sites. For example, if males are required to provide a nutrient-rich gift (most likely food) before mating, then when nutrients are abundant the OSR will be male-biased, because there are plenty of nutrients available for gifts. However, if nutrients are scarce, fewer males will be ready to reproduce, causing the population to have a female-biased OSR. [ 10 ] [ 14 ] [ 15 ] [ 16 ] Another example would be a species in which males provide care for offspring and maintain a nest. [ 17 ] If the availability of nesting sites decreased, the population would trend towards a more female-biased OSR, because only a small number of males actually have a nest, while all of the females, whether or not they have access to a nest, are still producing eggs. [ 18 ] A major factor that OSR can predict is the opportunity for sexual selection. As the OSR becomes more biased, the sex that is in excess will tend to undergo more competition for mates and therefore undergo stronger sexual selection. [ 4 ] [ 8 ] [ 19 ] Intensity of competition is also a factor that can be predicted by OSR. 
[ 2 ] According to sexual selection theory, whichever sex is more abundant is expected to compete more strongly for mates, while the less abundant sex is expected to be "choosier" about whom it mates with. When the OSR is biased towards one sex, more interaction and competition would be expected from the sex that is more available to mate: a female-biased population shows more female-female competition, while a male-biased population shows more male-male interaction and competitiveness. Though both sexes may compete for mates, the biased OSR predicts which sex is the predominant competitor (the sex that exhibits the most competition). [ 10 ] [ 20 ] [ 21 ] The OSR can also predict what will happen to mate guarding in a population. As the OSR becomes more biased towards one sex, mate guarding tends to increase, likely because the number of rivals (members of a given sex who are also ready to mate) is increased. If a population is male-biased, there are many more rival males competing for a mate, so males who already have a mate are more likely to guard her. [ 22 ]
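The basic ratio described above can be sketched in a few lines of Python. This is a minimal illustration of the definition, not code from any published OSR study; the function and label names are hypothetical.

```python
def operational_sex_ratio(ready_males, ready_females):
    """OSR: the ratio of sexually competing males that are ready to mate
    to sexually competing females that are ready to mate."""
    if ready_females == 0:
        raise ValueError("no fertilizable females available")
    return ready_males / ready_females

def bias(osr):
    """Label the direction of bias; an OSR of 1.0 is unbiased."""
    if osr > 1.0:
        return "male-biased"
    if osr < 1.0:
        return "female-biased"
    return "unbiased"
```

For example, 30 ready males and 10 ready females give an OSR of 3.0, a male-biased population in which male-male competition would be expected to dominate.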
https://en.wikipedia.org/wiki/Operational_sex_ratio
An operational taxonomic unit ( OTU ) is an operational definition used to classify groups of closely related individuals. The term was originally introduced in 1963 by Robert R. Sokal and Peter H. A. Sneath in the context of numerical taxonomy , where an "operational taxonomic unit" is simply the group of organisms currently being studied. [ 1 ] In this sense, an OTU is a pragmatic definition to group individuals by similarity, equivalent to but not necessarily in line with classical Linnaean taxonomy or modern evolutionary taxonomy . Nowadays, however, the term "OTU" is commonly used in a different context and refers to clusters of (uncultivated or unknown) organisms, grouped by DNA sequence similarity of a specific taxonomic marker gene (originally coined as mOTU; molecular OTU). [ 2 ] In other words, OTUs are pragmatic proxies for " species " (microbial or metazoan ) at different taxonomic levels, in the absence of traditional systems of biological classification as are available for macroscopic organisms. For several years, OTUs have been the most commonly used units of diversity, especially when analysing small-subunit rRNA marker gene sequence datasets: 16S (for prokaryotes) or 18S (for eukaryotes [ 3 ] ). Sequences can be clustered according to their similarity to one another, and operational taxonomic units are defined based on the similarity threshold set by the researcher (usually 97% similarity, although 100% similarity is also common; such clusters are known as single variants [ 4 ] ). It remains debatable how well this commonly used method recapitulates true microbial species phylogeny or ecology. Although OTUs can be calculated differently when using different algorithms or thresholds, research by Schmidt et al. (2014) demonstrated that microbial OTUs were generally ecologically consistent across habitats and several OTU clustering approaches. [ 5 ] The number of OTUs defined may be inflated due to errors in DNA sequencing . 
[ 6 ] There are three main approaches to clustering OTUs: [ 7 ]
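Threshold-based OTU clustering as described above can be sketched with a toy greedy centroid scheme in Python. This is only an illustration of the idea, not one of the production algorithms (such as those in UCLUST or CD-HIT); it assumes equal-length, pre-aligned sequences and a simplistic per-position identity measure.

```python
def identity(a, b):
    """Fraction of matching positions between two equal-length, aligned sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(sequences, threshold=0.97):
    """Greedy centroid clustering: each sequence joins the first OTU whose
    representative it matches at or above the similarity threshold;
    otherwise it founds a new OTU with itself as representative."""
    otus = []  # list of (representative, members) pairs
    for seq in sequences:
        for rep, members in otus:
            if identity(seq, rep) >= threshold:
                members.append(seq)
                break
        else:
            otus.append((seq, [seq]))
    return otus
```

With short toy sequences and a 75% threshold, `cluster_otus(["AAAA", "AAAT", "TTTT"], 0.75)` groups the first two sequences into one OTU and puts the third in its own, mirroring how a 97% threshold partitions real marker gene reads. Note that the result depends on input order, one reason different clustering approaches can yield different OTU counts.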
https://en.wikipedia.org/wiki/Operational_taxonomic_unit
Operational technology ( OT ) is hardware and software that detects or causes a change, through the direct monitoring and/or control of industrial equipment, assets , processes, and events . [ 1 ] The term has become established to demonstrate the technological and functional differences between traditional information technology (IT) systems and the industrial control systems (ICS) environment, the so-called "IT in the non-carpeted areas". Examples of operational technology include: The term usually describes environments containing industrial control systems (ICS), such as supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), remote terminal units (RTU) and programmable logic controllers (PLC), as well as dedicated networks and organization units. The built environment, whether commercial or domestic, is increasingly controlled and monitored via tens, hundreds, or thousands of Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices. In this application space, these IoT devices are interconnected via converged-technology edge IoT platforms and/or via "cloud"-based applications. Embedded systems are also included in the sphere of operational technology (e.g. smart instrumentation), along with a large subset of scientific data acquisition, control, and computing devices. An OT device could be as small as the engine control unit (ECU) of a car or as large as the distributed control network for a national electricity grid. Systems that process operational data (including electronic, telecommunications, computer systems and technical components) are included under the term operational technology. OT systems can be required to control valves, engines, conveyors and other machines to regulate various process values, such as temperature, pressure and flow, and to monitor them to prevent hazardous conditions. OT systems use various technologies for hardware design and communications protocols that are unknown in IT. 
Common problems include supporting legacy systems and devices and numerous vendor architectures and standards. Since OT systems often supervise industrial processes, availability must usually be sustained. This often means that real-time (or near-real-time) processing is required, with high rates of reliability and availability. Laboratory systems (heterogeneous instruments with embedded computer systems, or often non-standardized technical components used in their computer systems) are commonly a borderline case between IT and OT, since they mostly do not fit into the standard IT scope but are also often not part of OT core definitions. This kind of environment may also be referred to as industrial information technology (IIT). Historical OT networks utilized proprietary protocols optimized for the required functions, some of which have become adopted as 'standard' industrial communications protocols (e.g. DNP3 , Modbus , Profibus , LonWorks , DALI , BACnet , KNX , EnOcean and OPC-UA ). More recently, IT-standard network protocols are being implemented in OT devices and systems to reduce complexity and increase compatibility with more traditional IT hardware (e.g. TCP/IP); this, however, has had a demonstrable reduction in security for OT systems, which in the past have relied on air gaps and the inability to run PC-based malware (see Stuxnet for a well-known example of this change). The term operational technology as applied to industrial control systems was first published in a research paper from Gartner in May 2006 (Steenstrup, Sumic, Spiers, Williams) and presented publicly in September 2006 at the Gartner Energy and Utilities IT Summit. [ 2 ] Initially the term was applied to power utility control systems, but over time was adopted by other industrial sectors and used in combination with IoT . 
[ 3 ] A principal driver of the adoption of the term was that the nature of operational technology platforms had evolved from bespoke proprietary systems to complex software portfolios that rely on IT infrastructure. This change was termed IT-OT convergence. [ 4 ] The concept of aligning and integrating the IT and OT systems of industrial companies gained importance as companies realized that physical assets and infrastructure were both managed by OT systems and also generated data for the IT systems running the business. In May 2009 a paper was presented at the 4th World Congress on Engineering Asset Management in Athens, Greece, outlining the importance of this in the area of asset management. [ 5 ] Industrial technology companies such as GE, Hitachi, Honeywell, Siemens, ABB and Rockwell are the main providers of OT platforms and systems, either embedded in equipment or added to it for control, management and monitoring. These industrial technology companies have needed to evolve into software companies rather than being strictly machine providers. This change impacts their business models, which are still evolving. [ 6 ] From the very beginning, the security of operational technology has relied almost entirely on the standalone nature of OT installations: security by obscurity. At least since 2005, OT systems have become linked to IT systems with the corporate goal of widening an organization's ability to monitor and adjust its OT systems, which has introduced massive challenges in securing them. [ 7 ] Approaches known from regular IT are usually replaced or redesigned to align with the OT environment. OT has different priorities and a different infrastructure to protect when compared with IT; typically IT systems are designed around 'Confidentiality, Integrity, Availability' (i.e. 
keep information safe and correct before allowing a user to access it), whereas OT systems require 'real-time control and functionality-change flexibility, availability, integrity, confidentiality' to operate effectively (i.e. present the user with information wherever possible and worry about correctness or confidentiality after). Other challenges affecting the security of OT systems include: OT systems often control and monitor important industrial processes, critical infrastructure, and other physical devices. These networks are vital for the proper functioning of various industries, such as manufacturing, power generation and transportation, and for society as a whole. The most common vulnerabilities and attack vectors should be addressed. To protect against these risks, organizations should adopt a proactive, multi-layered security approach, including regular risk assessments, network segmentation, strong authentication, and access controls, as well as continuous monitoring and incident response capabilities. Operational technology is widely used in refineries, power plants, nuclear plants, etc., and as such has become a common, crucial element of critical infrastructure systems. Depending on the country, there are increasing legal obligations for critical infrastructure operators with regard to the implementation of OT systems. In addition, certainly since 2000, hundreds of thousands of buildings have had IoT building management, automation and smart lighting control solutions fitted. [ 8 ] These solutions have either no proper security or very inadequate security capabilities, whether designed in or applied. 
[ 9 ] This has recently led to bad actors exploiting such solutions' vulnerabilities with ransomware attacks, causing system lock-outs and operational failures and exposing businesses operating in such buildings to immense health and safety, operational, brand reputation and financial risks. [ 10 ] There is a strong focus on subjects like IT/OT cooperation or IT/OT alignment [ 11 ] in the modern industrial setting. It is crucial for companies to build close cooperation between IT and OT departments, resulting in increased effectiveness in many areas of OT and IT systems alike (such as change management, incident management and security standards). [ 12 ] [ 13 ] A typical restriction is the refusal to allow OT systems to perform safety functions (particularly in the nuclear environment), instead relying on hard-wired control systems to perform such functions; this decision stems from the widely recognized difficulty of substantiating software (e.g. code may perform marginally differently once compiled). The Stuxnet malware is one example of this, highlighting the potential for disaster should a safety system become infected with malware (whether targeted at that system or infected accidentally). Operational technology is utilized in many sectors and environments, such as:
https://en.wikipedia.org/wiki/Operational_technology
Operations and Technology Management (OTM) is an interdisciplinary major which prepares students to gain knowledge and skills in the areas of operations management, IT management, and data analytics. This major is typically offered as part of a business school, and the curriculum is designed to develop the skills needed to manage and improve business operations through the integrated use of theories and methods from both operations management and information technology (IT) management. [ 1 ] Because of its interdisciplinary nature, students graduating with OTM degrees tend to have more career options across a wide range of industries. For instance, students with OTM degrees can pursue many roles across the Operations, IT, and Analytics fields. [ 2 ] Many universities offer this major. For instance, the University of Portland offers BBA in OTM and MS in OTM (MSOTM) programs. [ 3 ] [ 4 ] Harvard University offers MBA and DBA programs in Technology and Operations Management (TOM). [ 5 ] The University of Wisconsin-Madison offers BBA and MBA programs in OTM. [ 6 ] Cal Poly-Pomona offers programs in Technology and Operations Management (TOM). [ 7 ] The UCLA Anderson School of Management offers Decisions, Operations and Technology Management (DOTM) programs. [ 8 ] Boston University offers programs in Operations & Technology Management (OTM). NYU's Stern offers a specialization in management of technology and operations .
https://en.wikipedia.org/wiki/Operations_and_technology_management
Operations engineering is a branch of engineering that is mainly concerned with the analysis and optimization of operational problems using scientific and mathematical methods. [ 1 ] It most frequently has applications in the areas of broadcasting and industrial engineering , and also in the creative and technology industries . Operations engineering is considered to be a subdiscipline of operations research and operations management .
https://en.wikipedia.org/wiki/Operations_engineering
Operations security ( OPSEC ) is a process that identifies critical information to determine whether friendly actions can be observed by enemy intelligence, determines if information obtained by adversaries could be interpreted to be useful to them, and then executes selected measures that eliminate or reduce adversary exploitation of friendly critical information. The term "operations security" was coined by the United States military during the Vietnam War . In 1966, United States Admiral Ulysses Sharp established a multidisciplinary security team to investigate the failure of certain combat operations during the Vietnam War . This operation was dubbed Operation Purple Dragon, and included personnel from the National Security Agency and the Department of Defense . [ 1 ] When the operation concluded, the Purple Dragon team codified their recommendations. They called the process "Operations Security" in order to distinguish the process from existing processes and ensure continued inter-agency support. [ 2 ] In 1988, President Ronald Reagan signed National Security Decision Directive (NSDD) 298. This document established the National Operations Security Program and named the Director of the National Security Agency as the executive agent for inter-agency OPSEC support. This document also established the Interagency OPSEC Support Staff (IOSS). [ 3 ] The private sector has also adopted OPSEC as a defensive measure against competitive intelligence collection efforts. [ 4 ] NIST SP 800-53 defines OPSEC as the "process by which potential adversaries can be denied information about capabilities and intentions by identifying, controlling, and protecting generally unclassified evidence of the planning and execution of sensitive activities." [ 5 ]
https://en.wikipedia.org/wiki/Operations_security
Operations support systems ( OSS ), operational support systems in British usage, or Operation System ( OpS ) in NTT [ 1 ] are computer systems used by telecommunications service providers to manage their networks (e.g., telephone networks). They support management functions such as network inventory , service provisioning , network configuration and fault management . Together with business support systems (BSS), operations support systems support various end-to-end telecommunication services. BSS and OSS have their own data and service responsibilities. The two systems together are often abbreviated OSS/BSS, BSS/OSS or simply B/OSS. The acronym OSS is also used in a singular form to refer to all the Operations Support Systems viewed as a whole system . Different subdivisions of OSS have been proposed by the TM Forum , industrial research labs, or OSS vendors. In general, an OSS covers at least the following five functions: Before about 1970, many OSS activities were performed by manual administrative processes. However, it became obvious that much of this activity could be replaced by computers . In the next 5 years or so, the telephone companies created a number of computer systems (or software applications ) which automated much of this activity. This was one of the driving factors for the development of the Unix operating system and the C programming language . The Bell System purchased their own product line of PDP-11 computers from Digital Equipment Corporation for a variety of OSS applications. OSS systems used in the Bell System include AMATPS , CSOBS, EADAS , Remote Memory Administration System (RMAS), Switching Control Center System (SCCS), Service Evaluation System (SES), Trunks Integrated Record Keeping System (TIRKS), and many more. OSS systems from this era are described in the Bell System Technical Journal , Bell Labs Record , and Telcordia Technologies (now part of Ericsson ) SR-2275. 
[ 2 ] Many OSS systems were initially not linked to each other and often required manual intervention. For example, consider the case where a customer wants to order a new telephone service. The ordering system would take the customer's details and details of their order, but would not be able to configure the telephone exchange directly—this would be done by a switch management system. Details of the new service would need to be transferred from the order handling system to the switch management system—and this would normally be done by a technician re-keying the details from one screen into another—a process often referred to as "swivel chair integration". This was clearly another source of inefficiency, so the focus for the next few years was on creating automated interfaces between the OSS applications—OSS integration. Cheap and simple OSS integration remains a major goal of most telecom companies. A lot of the work on OSS has been centered on defining its architecture. Put simply, there are four key elements of OSS: During the 1990s, new OSS architecture definitions were developed by the ITU Telecommunication Standardization Sector (ITU-T) in its Telecommunications Management Network (TMN) model. This established a 4-layer model of TMN applicable within an OSS: A fifth level is mentioned at times, being the network elements themselves, though the standards speak of only four levels. This was a basis for later work. Network management was further defined by the ISO using the FCAPS model—Fault, Configuration, Accounting, Performance and Security. This basis was adopted by the ITU-T TMN standards as the functional model for the technology base of the TMN standards M.3000 – M.3599 series. Although the FCAPS model was originally conceived and is applicable for an IT enterprise network, it was adopted for use in the public networks run by telecommunication service providers adhering to ITU-T TMN standards. 
A big issue of network and service management is the ability to manage and control the network elements of the access and core networks. Historically, much effort has been spent in standardization fora (ITU-T, 3GPP) to define standard protocols for network management, but with little success or practical result. On the other hand, the IETF's Simple Network Management Protocol (SNMP) has become the de facto standard for internet and telco management at the EML-NML communication level. From 2000 and beyond, with the growth of new broadband and VoIP services, the management of home networks has also entered the scope of OSS and network management. The DSL Forum TR-069 specification has defined the CPE WAN Management Protocol (CWMP), suitable for managing home-network devices and terminals at the EML-NML interface. The TM Forum , formerly the TeleManagement Forum, is an international membership organization of communications service providers and suppliers to the communications industry. While OSS is generally dominated by proprietary and custom technologies, TM Forum promotes standards and frameworks in OSS and BSS. By 2005, developments in OSS architecture were the results of the TM Forum's New Generation Operations Systems and Software (NGOSS) program, which was established in 2000. This established a set of principles that OSS integration should adopt, along with a set of models that provide standardized approaches. NGOSS was renamed Frameworx. The TM Forum describes Frameworx as an architecture that is: The components interact through a common communications vehicle (using an information exchange infrastructure; e.g., EAI , Web Services , EJB ). The behavior can be controlled through the use of process management and/or policy management to orchestrate the functionality provided by the services offered by the components. 
The early focus of the TM Forum's NGOSS work was on building reference models to support a business stakeholder view on process, information and application interaction. Running in parallel were activities that supported an implementation stakeholder view on interface specifications to provide access to OSS capability (primarily MTNM). The MTNM work evolved into a set of Web Services providing Multi-Technology Operations System Interfaces MTOSI . Most recently, [ when? ] the OSS through Java initiative (OSS/J) joined the TMF to provide NGOSS-based BSS/OSS APIs . Open Digital Architecture (ODA) offers an industry-agreed blueprint, language and set of key design principles to follow. It will provide pragmatic pathways for the journey from maintaining monolithic, legacy software solutions, towards managing nimble, cloud based capabilities that can be orchestrated using AI . It is a reference architecture that maps TM Forum’s Open APIs against technical and business platform functions. [ 3 ]
https://en.wikipedia.org/wiki/Operations_support_system
In mathematics , an operator is generally a mapping or function that acts on elements of a space to produce elements of another space (possibly, and sometimes required to be, the same space). There is no general definition of an operator , but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly (for example in the case of an integral operator ), and may be extended so as to act on related objects (an operator that acts on functions may act also on differential equations whose solutions are functions that satisfy the equation). (see Operator (physics) for other examples) The most basic operators are linear maps , which act on vector spaces . Linear operators refer to linear maps whose domain and range are the same space, for example from R^n to R^n. [ 1 ] [ 2 ] [ a ] Such operators often preserve properties, such as continuity . For example, differentiation and indefinite integration are linear operators; operators that are built from them are called differential operators , integral operators or integro-differential operators. Operator is also used for denoting the symbol of a mathematical operation . This is related with the meaning of "operator" in computer programming (see Operator (computer programming) ). The most common kind of operators encountered are linear operators . Let U and V be vector spaces over some field K . A mapping A : U → V is linear if A(αx + βy) = αAx + βAy for all x and y in U , and for all α , β in K . 
This means that a linear operator preserves vector space operations, in the sense that it does not matter whether you apply the linear operator before or after the operations of addition and scalar multiplication. In more technical words, linear operators are morphisms between vector spaces. In the finite-dimensional case linear operators can be represented by matrices in the following way. Let K be a field, and U and V be finite-dimensional vector spaces over K . Select a basis u_1, …, u_n in U and v_1, …, v_m in V . Then let x = x^i u_i be an arbitrary vector in U (assuming the Einstein convention ), and A : U → V be a linear operator. Then A x = x^i A u_i = x^i (A u_i)^j v_j . Then a_i^j ≡ (A u_i)^j , with all a_i^j ∈ K , is the matrix form of the operator A in the fixed basis {u_i}. The tensor a_i^j does not depend on the choice of x , and A x = y if a_i^j x^i = y^j . Thus in fixed bases n -by- m matrices are in bijective correspondence to linear operators from U to V . The important concepts directly related to operators between finite-dimensional vector spaces are the ones of rank , determinant , inverse operator , and eigenspace . 
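The matrix form of a linear operator in fixed bases can be made concrete with a small Python sketch (illustrative only; the helper names are hypothetical). Here the operator is differentiation on polynomials of degree less than 3, with basis {1, t, t^2}, and a vector is its coordinate list.

```python
def mat_vec(a, x):
    """Apply the matrix form of a linear operator to coordinates x:
    y^j = a_i^j x^i (row j of `a` produces output coordinate j)."""
    return [sum(a[j][i] * x[i] for i in range(len(x))) for j in range(len(a))]

# Matrix of d/dt on polynomials of degree < 3 in the basis {1, t, t^2}.
# Column i holds the coordinates of the image of the i-th basis vector:
# d/dt(1) = 0, d/dt(t) = 1, d/dt(t^2) = 2t.
D = [
    [0, 1, 0],
    [0, 0, 2],
    [0, 0, 0],
]

# p(t) = 3 + 2t + t^2 has coordinates [3, 2, 1]; its derivative is 2 + 2t.
assert mat_vec(D, [3, 2, 1]) == [2, 2, 0]
```

The same `mat_vec` works for any operator once its matrix in the chosen bases is written down, which is exactly the bijective correspondence described above.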
Linear operators also play a great role in the infinite-dimensional case. The concepts of rank and determinant cannot be extended to infinite-dimensional matrices. This is why very different techniques are employed when studying linear operators (and operators in general) in the infinite-dimensional case. The study of linear operators in the infinite-dimensional case is known as functional analysis (so called because various classes of functions form interesting examples of infinite-dimensional vector spaces). The space of sequences of real numbers, or more generally sequences of vectors in any vector space, themselves form an infinite-dimensional vector space. The most important cases are sequences of real or complex numbers, and these spaces, together with linear subspaces, are known as sequence spaces . Operators on these spaces are known as sequence transformations . Bounded linear operators over a Banach space form a Banach algebra with respect to the standard operator norm. The theory of Banach algebras develops a very general concept of spectra that elegantly generalizes the theory of eigenspaces. Let U and V be two vector spaces over the same ordered field (for example, R ), equipped with norms . Then a linear operator from U to V is called bounded if there exists c > 0 such that ‖A x‖_V ≤ c ‖x‖_U for every x in U . Bounded operators form a vector space. On this vector space we can introduce a norm that is compatible with the norms of U and V : ‖A‖ = inf { c : ‖A x‖_V ≤ c ‖x‖_U }. In the case of operators from U to itself it can be shown that ‖AB‖ ≤ ‖A‖ · ‖B‖. Any unital normed algebra with this property is called a Banach algebra . It is possible to generalize spectral theory to such algebras. 
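For a finite-dimensional operator given as a matrix, the operator norm ‖A‖ (the smallest constant c in the bound above, equal to the largest singular value) can be estimated numerically. The sketch below uses power iteration on AᵀA; it is an illustrative stdlib-only implementation, assuming a nonzero matrix and a starting vector not orthogonal to the top singular vector.

```python
import math

def op_norm(a, iters=200):
    """Estimate the operator norm ||A|| = sup ||Ax|| / ||x|| of a matrix
    (list of rows) by power iteration on A^T A; the limit of ||Ax|| over
    the converged unit vector x is the largest singular value."""
    m, n = len(a), len(a[0])
    x = [1.0] * n  # deterministic starting vector
    for _ in range(iters):
        ax = [sum(a[j][i] * x[i] for i in range(n)) for j in range(m)]      # A x
        atax = [sum(a[j][i] * ax[j] for j in range(m)) for i in range(n)]   # A^T (A x)
        norm = math.sqrt(sum(v * v for v in atax))
        x = [v / norm for v in atax]                                        # renormalize
    ax = [sum(a[j][i] * x[i] for i in range(n)) for j in range(m)]
    return math.sqrt(sum(v * v for v in ax))
```

For the diagonal matrix diag(3, 1) this converges to 3, the largest stretch factor, matching the definition of boundedness with c = 3.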
C*-algebras , which are Banach algebras with some additional structure, play an important role in quantum mechanics . From the point of view of functional analysis , calculus is the study of two linear operators: the differential operator d/dt, and the Volterra operator ∫_0^t. Three operators are key to vector calculus : grad, div, and curl. As an extension of vector calculus operators to physics, engineering and tensor spaces, the grad, div and curl operators are also often associated with tensor calculus as well as vector calculus. [ 3 ] In geometry , additional structures on vector spaces are sometimes studied. Operators that map such vector spaces to themselves bijectively are very useful in these studies; they naturally form groups by composition. For example, bijective operators preserving the structure of a vector space are precisely the invertible linear operators . They form the general linear group under composition. However, they do not form a vector space under operator addition, since, for example, both the identity and −identity are invertible (bijective), but their sum, 0, is not. Operators preserving the Euclidean metric on such a space form the isometry group , and those that fix the origin form a subgroup known as the orthogonal group . Operators in the orthogonal group that also preserve the orientation of vector tuples form the special orthogonal group , or the group of rotations. Operators are also involved in probability theory, such as expectation , variance , and covariance , which are used to name both number statistics and the operators which produce them. 
Indeed, every covariance is basically a dot product : every variance is a dot product of a vector with itself, and thus is a quadratic norm ; every standard deviation is a norm (square root of the quadratic norm); the corresponding cosine to this dot product is the Pearson correlation coefficient ; expected value is basically an integral operator (used to measure weighted shapes in the space). The Fourier transform is useful in applied mathematics, particularly physics and signal processing. It is another integral operator; it is useful mainly because it converts a function on one (temporal) domain to a function on another (frequency) domain, in a way that is effectively invertible . No information is lost, as there is an inverse transform operator. In the simple case of periodic functions , this result is based on the theorem that any continuous periodic function can be represented as the sum of a series of sine waves and cosine waves: f(t) = a_0/2 + Σ_{n=1}^∞ [ a_n cos(nωt) + b_n sin(nωt) ]. The tuple (a_0, a_1, b_1, a_2, b_2, …) is in fact an element of an infinite-dimensional vector space ℓ², and thus a Fourier series is a linear operator. When dealing with a general function R → C, the transform takes on an integral form. The Laplace transform is another integral operator and is involved in simplifying the process of solving differential equations. Given f = f(t), it is defined by: F(s) = L{f}(s) = ∫_0^∞ e^{−st} f(t) dt.
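The Fourier coefficients in the series above can be approximated numerically by discretizing the defining integrals a_n = (2/T)∫ f(t)cos(nωt) dt and b_n = (2/T)∫ f(t)sin(nωt) dt over one period. The following is a minimal stdlib-only sketch (the function name and parameters are illustrative), using plain Riemann sums.

```python
import math

def fourier_coefficients(f, n_max, period=2 * math.pi, samples=2048):
    """Approximate the Fourier coefficients a_n, b_n (n = 0..n_max) of a
    periodic function f by Riemann sums over one period:
      a_n = (2/T) * integral of f(t) cos(n w t),
      b_n = (2/T) * integral of f(t) sin(n w t),  with w = 2*pi/T."""
    omega = 2 * math.pi / period
    dt = period / samples
    ts = [i * dt for i in range(samples)]
    a, b = [], []
    for n in range(n_max + 1):
        a.append(2 / period * dt * sum(f(t) * math.cos(n * omega * t) for t in ts))
        b.append(2 / period * dt * sum(f(t) * math.sin(n * omega * t) for t in ts))
    return a, b
```

Applied to f(t) = sin(t), the computation recovers b_1 ≈ 1 with all other coefficients near zero, showing that the map from f to its coefficient tuple (a_0, a_1, b_1, …) is exactly the linear operator described above.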
https://en.wikipedia.org/wiki/Operator_(mathematics)
An operator is a function from a space of physical states onto another space of physical states. The simplest example of the utility of operators is the study of symmetry (which makes the concept of a group useful in this context). Because of this, they are useful tools in classical mechanics . Operators are even more important in quantum mechanics , where they form an intrinsic part of the formulation of the theory. They play a central role in describing observables (measurable quantities like energy, momentum, etc.). In classical mechanics, the movement of a particle (or system of particles) is completely determined by the Lagrangian L ( q , q ˙ , t ) {\displaystyle L(q,{\dot {q}},t)} or equivalently the Hamiltonian H ( q , p , t ) {\displaystyle H(q,p,t)} , a function of the generalized coordinates q , generalized velocities q ˙ = d q / d t {\displaystyle {\dot {q}}=\mathrm {d} q/\mathrm {d} t} and their conjugate momenta : If either L or H is independent of a generalized coordinate q , meaning that L and H do not change when q is changed, which in turn means the dynamics of the particle are still the same even when q changes, the corresponding momenta conjugate to those coordinates will be conserved (this is part of Noether's theorem , and the invariance of motion with respect to the coordinate q is a symmetry ). Operators in classical mechanics are related to these symmetries. More technically, when H is invariant under the action of a certain group of transformations G : The elements of G are physical operators, which map physical states among themselves. where R ( n ^ , θ ) {\displaystyle R({\hat {\boldsymbol {n}}},\theta )} is the rotation matrix about an axis defined by the unit vector n ^ {\displaystyle {\hat {\boldsymbol {n}}}} and angle θ .
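The rotation operators just mentioned illustrate the group structure of physical operators: composing two rotations about the same axis yields the rotation by the summed angle. A brief NumPy check (the z-axis and the angles are arbitrary choices for illustration):

```python
import numpy as np

# Rotation about the z-axis by angle theta, as a 3x3 matrix operator.
def Rz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Group property: composition of rotations is the rotation by the sum.
a, b = 0.4, 1.1
assert np.allclose(Rz(a) @ Rz(b), Rz(a + b))

# The identity rotation is the identity operator.
assert np.allclose(Rz(0.0), np.eye(3))
```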
If the transformation is infinitesimal , the operator action should be of the form where I {\displaystyle I} is the identity operator, ϵ {\displaystyle \epsilon } is a parameter with a small value, and A {\displaystyle A} will depend on the transformation at hand, and is called a generator of the group . Again, as a simple example, we will derive the generator of the space translations on 1D functions. As it was stated, T a f ( x ) = f ( x − a ) {\displaystyle T_{a}f(x)=f(x-a)} . If a = ϵ {\displaystyle a=\epsilon } is infinitesimal, then we may write This formula may be rewritten as where D {\displaystyle D} is the generator of the translation group, which in this case happens to be the derivative operator. Thus, it is said that the generator of translations is the derivative. The whole group may be recovered, under normal circumstances, from the generators, via the exponential map . In the case of the translations the idea works like this. The translation for a finite value of a {\displaystyle a} may be obtained by repeated application of the infinitesimal translation: with the ⋯ {\displaystyle \cdots } standing for the application N {\displaystyle N} times. If N {\displaystyle N} is large, each of the factors may be considered to be infinitesimal: But this limit may be rewritten as an exponential: To be convinced of the validity of this formal expression, we may expand the exponential in a power series : The right-hand side may be rewritten as which is just the Taylor expansion of f ( x − a ) {\displaystyle f(x-a)} , which was our original value for T a f ( x ) {\displaystyle T_{a}f(x)} . The mathematical properties of physical operators are a topic of great importance in itself. For further information, see C*-algebra and Gelfand–Naimark theorem . The mathematical formulation of quantum mechanics (QM) is built upon the concept of an operator. 
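The claim that the exponential of the generator reproduces a finite translation can be tested numerically by truncating the power series. A sketch for f = sin (the choice of test function and truncation depth are ours):

```python
import math

# Truncated exp(-a*D) applied to sin at x, using the k-th derivative
# identity d^k/dx^k sin(x) = sin(x + k*pi/2); this should reproduce
# the exact translate sin(x - a), per the Taylor-expansion argument.
def translated_sin(x, a, terms=30):
    return sum((-a) ** k / math.factorial(k) * math.sin(x + k * math.pi / 2)
               for k in range(terms))

x, a = 0.7, 1.3
assert abs(translated_sin(x, a) - math.sin(x - a)) < 1e-12
```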
Physical pure states in quantum mechanics are represented as unit-norm vectors (probabilities are normalized to one) in a special complex Hilbert space . Time evolution in this vector space is given by the application of the evolution operator . Any observable , i.e., any quantity which can be measured in a physical experiment, should be associated with a self-adjoint linear operator . The operators must yield real eigenvalues , since they are values which may come up as the result of the experiment. Mathematically this means the operators must be Hermitian . [ 1 ] The probability of each eigenvalue is related to the projection of the physical state on the subspace related to that eigenvalue. See below for mathematical details about Hermitian operators. In the wave mechanics formulation of QM, the wavefunction varies with space and time, or equivalently momentum and time (see position and momentum space for details), so observables are differential operators . In the matrix mechanics formulation, the norm of the physical state should stay fixed, so the evolution operator should be unitary , and the operators can be represented as matrices. Any other symmetry, mapping a physical state into another, should keep this restriction. The wavefunction must be square-integrable (see L p spaces ), meaning: and normalizable, so that: Two cases of eigenstates (and eigenvalues) are: Let ψ be the wavefunction for a quantum system, and A ^ {\displaystyle {\hat {A}}} be any linear operator for some observable A (such as position, momentum, energy, angular momentum etc.). If ψ is an eigenfunction of the operator A ^ {\displaystyle {\hat {A}}} , then where a is the eigenvalue of the operator, corresponding to the measured value of the observable, i.e. observable A has a measured value a . 
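The normalization requirement can be checked for a standard example, the Gaussian wave packet ψ(x) = π^(−1/4) e^(−x²/2) (our choice of illustration, not a function named in the article):

```python
import numpy as np

# Verify that the Gaussian wave packet is normalized:
# the integral of |psi|^2 over the real line equals 1.
x = np.linspace(-10, 10, 100001)
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

prob = np.abs(psi) ** 2
# Trapezoid rule; tails beyond |x| = 10 are negligible (~exp(-100)).
norm = np.sum((prob[1:] + prob[:-1]) * np.diff(x)) / 2.0
assert abs(norm - 1.0) < 1e-8
```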
If ψ is an eigenfunction of a given operator A ^ {\displaystyle {\hat {A}}} , then a definite quantity (the eigenvalue a ) will be observed if a measurement of the observable A is made on the state ψ . Conversely, if ψ is not an eigenfunction of A ^ {\displaystyle {\hat {A}}} , then it has no eigenvalue for A ^ {\displaystyle {\hat {A}}} , and the observable does not have a single definite value in that case. Instead, measurements of the observable A will yield each eigenvalue with a certain probability (related to the decomposition of ψ relative to the orthonormal eigenbasis of A ^ {\displaystyle {\hat {A}}} ). In bra–ket notation the above can be written; that are equal if | ψ ⟩ {\displaystyle \left|\psi \right\rangle } is an eigenvector , or eigenket of the observable A . Due to linearity, vectors can be defined in any number of dimensions, as each component of the vector acts on the function separately. One mathematical example is the del operator , which is itself a vector (useful in momentum-related quantum operators, in the table below). An operator in n -dimensional space can be written: where e j are basis vectors corresponding to each component operator A j . Each component will yield a corresponding eigenvalue a j {\displaystyle a_{j}} . Acting this on the wave function ψ : in which we have used A ^ j ψ = a j ψ . {\displaystyle {\hat {A}}_{j}\psi =a_{j}\psi .} In bra–ket notation: If two observables A and B have linear operators A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} , the commutator is defined by, The commutator is itself a (composite) operator. Acting the commutator on ψ gives: If ψ is an eigenfunction with eigenvalues a and b for observables A and B respectively, and if the operators commute: then the observables A and B can be measured simultaneously with infinite precision, i.e., uncertainties Δ A = 0 {\displaystyle \Delta A=0} , Δ B = 0 {\displaystyle \Delta B=0} simultaneously. 
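A concrete non-commuting pair is given by the standard Pauli spin matrices; their commutator is nonzero, so the corresponding spin components cannot both be sharp in the same state. A short NumPy check:

```python
import numpy as np

# Pauli matrices for spin-1/2 particles.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Commutator [A, B] = AB - BA.
comm = sx @ sy - sy @ sx

# [sigma_x, sigma_y] = 2i sigma_z, which is nonzero:
# sigma_x and sigma_y do not commute.
assert np.allclose(comm, 2j * sz)
assert not np.allclose(comm, np.zeros((2, 2)))
```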
ψ is then said to be the simultaneous eigenfunction of A and B. To illustrate this: It shows that measurement of A and B does not cause any shift of state, i.e., initial and final states are the same (no disturbance due to measurement). Suppose we measure A to get value a. We then measure B to get the value b. We measure A again. We still get the same value a. Clearly the state ( ψ ) of the system is not destroyed and so we are able to measure A and B simultaneously with infinite precision. If the operators do not commute, they cannot be prepared simultaneously to arbitrary precision, and there is an uncertainty relation between the observables, which holds even if ψ is an eigenfunction. Notable pairs are position-and-momentum and energy-and-time uncertainty relations, and the angular momenta (spin, orbital and total) about any two orthogonal axes (such as L x and L y , or s y and s z , etc.). [ 2 ] The expectation value (equivalently the average or mean value) is the average measurement of an observable, for a particle in region R . The expectation value ⟨ A ^ ⟩ {\displaystyle \left\langle {\hat {A}}\right\rangle } of the operator A ^ {\displaystyle {\hat {A}}} is calculated from: [ 3 ] This can be generalized to any function F of an operator: An example of F is the 2-fold action of A on ψ , i.e. squaring an operator or applying it twice: The definition of a Hermitian operator is: [ 1 ] Following from this, in bra–ket notation: Important properties of Hermitian operators include: An operator can be written in matrix form to map one basis vector to another. Since the operators are linear, the matrix is a linear transformation (aka transition matrix) between bases. Each basis element ϕ j {\displaystyle \phi _{j}} can be connected to another, [ 3 ] by the expression: which is a matrix element: A further property of a Hermitian operator is that eigenfunctions corresponding to different eigenvalues are orthogonal.
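The stated properties of Hermitian operators — real eigenvalues, and orthogonal eigenfunctions for distinct eigenvalues — can be illustrated with an arbitrary 2×2 Hermitian matrix (the specific entries are our own example):

```python
import numpy as np

# A Hermitian matrix (A equals its conjugate transpose), standing in
# for a Hermitian operator on a 2-dimensional state space.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)

# eigh is NumPy's eigensolver for Hermitian matrices.
evals, evecs = np.linalg.eigh(A)

# Eigenvalues are real ...
assert np.allclose(evals.imag, 0)

# ... and eigenvectors belonging to distinct eigenvalues are orthogonal.
v0, v1 = evecs[:, 0], evecs[:, 1]
assert abs(np.vdot(v0, v1)) < 1e-12
```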
[ 1 ] In matrix form, operators allow real eigenvalues to be found, corresponding to measurements. Orthogonality allows a suitable basis set of vectors to represent the state of the quantum system. The eigenvalues of the operator are also evaluated in the same way as for a square matrix, by solving the characteristic polynomial : where I is the n × n identity matrix ; as an operator it corresponds to the identity operator. For a discrete basis: while for a continuous basis: A non-singular operator A ^ {\displaystyle {\hat {A}}} has an inverse A ^ − 1 {\displaystyle {\hat {A}}^{-1}} defined by: If an operator has no inverse, it is a singular operator. In a finite-dimensional space, an operator is non-singular if and only if its determinant is nonzero: and hence the determinant is zero for a singular operator. The operators used in quantum mechanics are collected in the table below (see for example [ 1 ] [ 4 ] ). The bold-face vectors with circumflexes are not unit vectors ; they are 3-vector operators, all three spatial components taken together.
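Before turning to the table of operators, the determinant criterion for singularity and the characteristic-polynomial route to eigenvalues can be checked numerically (the matrices are our own examples):

```python
import numpy as np

# A non-singular operator: nonzero determinant, so an inverse exists.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert np.linalg.det(A) != 0
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))   # A A^{-1} = I

# Eigenvalues solve det(A - lambda*I) = 0; here (2-l)(3-l) = 0.
evals = np.linalg.eigvals(A)
assert np.allclose(np.sort(evals), [2.0, 3.0])

# A singular operator: proportional rows, zero determinant, no inverse.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert abs(np.linalg.det(B)) < 1e-12
```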
Momentum of a particle, in the position basis — Cartesian components: p ^ x = − i ℏ ∂ ∂ x , p ^ y = − i ℏ ∂ ∂ y , p ^ z = − i ℏ ∂ ∂ z {\displaystyle {\begin{aligned}{\hat {p}}_{x}&=-i\hbar {\frac {\partial }{\partial x}},&{\hat {p}}_{y}&=-i\hbar {\frac {\partial }{\partial y}},&{\hat {p}}_{z}&=-i\hbar {\frac {\partial }{\partial z}}\end{aligned}}} vector form: p ^ = − i ℏ ∇ {\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla \,\!} Momentum of a charged particle (charge q ) in an electromagnetic field with magnetic vector potential A — components: p ^ x = − i ℏ ∂ ∂ x − q A x p ^ y = − i ℏ ∂ ∂ y − q A y p ^ z = − i ℏ ∂ ∂ z − q A z {\displaystyle {\begin{aligned}{\hat {p}}_{x}=-i\hbar {\frac {\partial }{\partial x}}-qA_{x}\\{\hat {p}}_{y}=-i\hbar {\frac {\partial }{\partial y}}-qA_{y}\\{\hat {p}}_{z}=-i\hbar {\frac {\partial }{\partial z}}-qA_{z}\end{aligned}}} vector form: p ^ = P ^ − q A = − i ℏ ∇ − q A {\displaystyle {\begin{aligned}\mathbf {\hat {p}} &=\mathbf {\hat {P}} -q\mathbf {A} \\&=-i\hbar \nabla -q\mathbf {A} \\\end{aligned}}\,\!} Kinetic energy (translation) — components: T ^ x = − ℏ 2 2 m ∂ 2 ∂ x 2 T ^ y = − ℏ 2 2 m ∂ 2 ∂ y 2 T ^ z = − ℏ 2 2 m ∂ 2 ∂ z 2 {\displaystyle {\begin{aligned}{\hat {T}}_{x}&=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}\\[2pt]{\hat {T}}_{y}&=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial y^{2}}}\\[2pt]{\hat {T}}_{z}&=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial z^{2}}}\\\end{aligned}}} vector form: T ^ = 1 2 m p ^ ⋅ p ^ = 1 2 m ( − i ℏ ∇ ) ⋅ ( − i ℏ ∇ ) = − ℏ 2 2 m ∇ 2 {\displaystyle {\begin{aligned}{\hat {T}}&={\frac {1}{2m}}\mathbf {\hat {p}} \cdot \mathbf {\hat {p}} \\&={\frac {1}{2m}}(-i\hbar \nabla )\cdot (-i\hbar \nabla )\\&={\frac {-\hbar ^{2}}{2m}}\nabla ^{2}\end{aligned}}\,\!} Kinetic energy of a charged particle in an electromagnetic field — components: T ^ x = 1 2 m ( − i ℏ ∂ ∂ x − q A x ) 2 T ^ y = 1 2 m ( − i ℏ ∂ ∂ y − q A y ) 2 T ^ z = 1 2 m ( − i ℏ ∂ ∂ z − q A z ) 2 {\displaystyle {\begin{aligned}{\hat {T}}_{x}&={\frac {1}{2m}}\left(-i\hbar {\frac {\partial }{\partial x}}-qA_{x}\right)^{2}\\{\hat {T}}_{y}&={\frac {1}{2m}}\left(-i\hbar {\frac {\partial }{\partial y}}-qA_{y}\right)^{2}\\{\hat {T}}_{z}&={\frac {1}{2m}}\left(-i\hbar {\frac {\partial }{\partial z}}-qA_{z}\right)^{2}\end{aligned}}\,\!} vector form: T ^ = 1 2 m p ^ ⋅ p ^ = 1 2 m ( − i ℏ ∇ − q A ) ⋅ ( − i ℏ ∇ − q A ) = 1 2 m ( − i ℏ ∇ − q A ) 2 {\displaystyle {\begin{aligned}{\hat {T}}&={\frac {1}{2m}}\mathbf {\hat {p}} \cdot \mathbf {\hat {p}} \\&={\frac {1}{2m}}(-i\hbar \nabla -q\mathbf {A} )\cdot (-i\hbar \nabla -q\mathbf {A} )\\&={\frac {1}{2m}}(-i\hbar \nabla -q\mathbf {A} )^{2}\end{aligned}}\,\!} Kinetic energy (rotation), with moments of inertia I — components: T ^ x x = J ^ x 2 2 I x x T ^ y y = J ^ y 2 2 I y y T ^ z z = J ^ z 2 2 I z z {\displaystyle {\begin{aligned}{\hat {T}}_{xx}&={\frac {{\hat {J}}_{x}^{2}}{2I_{xx}}}\\{\hat {T}}_{yy}&={\frac {{\hat {J}}_{y}^{2}}{2I_{yy}}}\\{\hat {T}}_{zz}&={\frac {{\hat {J}}_{z}^{2}}{2I_{zz}}}\\\end{aligned}}\,\!} vector form: T ^ = J ^ ⋅ J ^ 2 I {\displaystyle {\hat {T}}={\frac {\mathbf {\hat {J}} \cdot \mathbf {\hat {J}} }{2I}}\,\!} [ citation needed ] Energy (time-dependent): E ^ = i ℏ ∂ ∂ t {\displaystyle {\hat {E}}=i\hbar {\frac {\partial }{\partial t}}\,\!} Time-independent: E ^ = E {\displaystyle {\hat {E}}=E\,\!} Spin, for spin-1/2 particles, is built from the Pauli matrices σ x = ( 0 1 1 0 ) σ y = ( 0 − i i 0 ) σ z = ( 1 0 0 − 1 ) {\displaystyle {\begin{aligned}\sigma _{x}&={\begin{pmatrix}0&1\\1&0\end{pmatrix}}\\\sigma _{y}&={\begin{pmatrix}0&-i\\i&0\end{pmatrix}}\\\sigma _{z}&={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}\end{aligned}}} where σ is the vector whose components are the Pauli matrices. The procedure for extracting information from a wave function is as follows. Consider the momentum p of a particle as an example. The momentum operator in position basis in one dimension is: Letting this act on ψ we obtain: if ψ is an eigenfunction of p ^ {\displaystyle {\hat {p}}} , then the momentum eigenvalue p is the value of the particle's momentum, found by: For three dimensions the momentum operator uses the nabla operator to become: In Cartesian coordinates (using the standard Cartesian basis vectors e x , e y , e z ) this can be written; that is: The process of finding eigenvalues is the same.
Since this is a vector and operator equation, if ψ is an eigenfunction, then each component of the momentum operator will have an eigenvalue corresponding to that component of momentum. Acting p ^ {\displaystyle \mathbf {\hat {p}} } on ψ obtains − i ℏ ∇ ψ {\displaystyle -i\hbar \nabla \psi } , componentwise the partial derivatives of ψ scaled by − i ℏ {\displaystyle -i\hbar } .
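The momentum eigenvalue relation can be checked symbolically. A SymPy sketch using the plane wave e^(ip₀x/ħ), a standard momentum eigenfunction (the symbol names are our own):

```python
import sympy as sp

# Symbols: position x, momentum eigenvalue p0, reduced Planck constant.
x = sp.Symbol('x', real=True)
p0 = sp.Symbol('p0', real=True)
hbar = sp.Symbol('hbar', positive=True)

# Plane wave psi(x) = exp(i*p0*x/hbar).
psi = sp.exp(sp.I * p0 * x / hbar)

# Apply the 1D momentum operator p = -i*hbar*d/dx.
p_psi = -sp.I * hbar * sp.diff(psi, x)

# The result is p0 * psi, i.e. psi is an eigenfunction with eigenvalue p0.
eigenvalue = sp.simplify(p_psi / psi)
assert sp.simplify(eigenvalue - p0) == 0
```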
https://en.wikipedia.org/wiki/Operator_(physics)
Operator Toll Dialing was a telephone call routing and toll-switching system for the Bell System and the independent telephone companies in the United States and Canada that paved the way for Direct Distance Dialing (DDD) by telephone service subscribers. Operator Toll Dialing was developed in the early 1940s for regional service and expanded nationwide by the end of the decade. It automated the switching and billing of long-distance calling , [ 1 ] and drastically reduced the time callers had to wait for connection completion. The concept and technology of Operator Toll Dialing evolved from the General Toll Switching Plan of 1929, and gained technical merit with the cutover of a new type of crossbar switching system (No. 4XB) in Philadelphia to commercial service in August 1943. [ 2 ] Operator Toll Dialing was the first system of its kind for automated forwarding of calls between toll switching centers. In Pennsylvania, it served customers only for regional toll traffic. It established initial experience with automatic toll switching for the design of a nationwide effort that was sometimes referred to as Nationwide Operator Toll Dialing . By the time of the first promotions of Nationwide Operator Toll Dialing to the general telecommunication industry in 1945, approximately 5% of the 2.7 million toll board calls per day were handled by the early incarnations of this system. [ 3 ] Operator Toll Dialing eliminated the need for intermediate operators to send toll calls to distant central offices. It also eliminated the inward operators for call completion to the local wire line. The technology involved stepwise routing of telephone calls from one toll center to another one that was logically closer to the destination. At each intermediate step the switch set up a circuit to a toll center that was able to route the call one step closer to the call's destination and automatically directed the traffic around congested or failed routes via backup paths.
An essential component for Operator Toll Dialing was the concept of destination routing , in which every toll center in the entire network uses a single, uniform code for each destination, rather than circuit identifiers unique to each regional switch. This required a uniform telephone numbering plan for all regional telephone networks across the continent. By 1947, a newly devised nationwide numbering plan established a geographic partitioning of the continent into numbering plan areas (NPAs), and designated the original North American area codes . An area code is a unique three-digit code serving as a destination routing code to a specific numbering plan area (NPA). This code was the same for all switching systems nationwide, and eliminated the need to publish specific trunk codes for each toll office to various destinations. The translation from NPA code to trunk codes was performed at each toll center without the need for any operator elsewhere to know the details. When automatic apparatus was installed for machine translation of the universal area codes to location-specific trunk codes, it freed operators from looking up trunk codes in directories to send the call one toll office closer to the destination telephone. The geographic layout of numbering plan areas across the North American continent was chosen primarily according to national, state, and territorial boundaries in the United States and Canada. [ 1 ] Some states or provinces needed to be divided into multiple areas. NPAs were created in accordance with principles deemed to maximize customer understanding and minimize dialing effort, while reducing plant cost. [ 4 ] Within each NPA, central offices also received three-digit codes, unique only within the numbering plan area, so that each central office could be reached by a six-digit dialing prefix ( NPA-XXX ). Each central office had a maximum capacity of 10,000 telephone lines to the locations of end users in the exchange area.
Thus, each telephone had a four-digit line number (0000–9999). Therefore, each telephone on the continent was uniquely identified by a telephone number of ten digits (area code–central office code–line number). By the end of 1948, AT&T commenced the wider use of the system with the cutover of new crossbar switching systems for toll-dialing in New York and Chicago, [ 5 ] which resulted in the handling of about ten percent of all Bell System long-distance calling by Operator Toll Dialing. [ 6 ] Altogether, the toll networks enabled operators to place calls directly to distant telephones in some three hundred cities. [ 6 ] On average, it took about two minutes for a long-distance call to be completed to its destination. As foreseen and stated in 1949, the target goal for call completion, after full implementation of the system across the nation, was one minute. For entering the destination codes and telephone numbers into newly designed machine-switching equipment, long-distance operators did not use slow rotary dials but a ten-button key set, operating at least twice as fast, which transmitted tone pulses ( multi-frequency signaling ) over regular voice channels to the remote switching centers. [ 6 ] Such channels were incapable of transmitting the direct-current pulses of a rotary dial. Operator Toll Dialing was gradually supplemented and superseded by Direct Distance Dialing (DDD) in the decades following. With DDD, customers themselves dialed an area code followed by a seven-digit telephone number to initiate long-distance calls without operator assistance. Activated first in 1951 for about ten thousand customers in Englewood, New Jersey, DDD was available in the major cities by the early 1960s, but was not fully implemented until the 1970s.
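The ten-digit decomposition described above — area code, central office code, line number — can be sketched as a trivial parser (illustrative modern code, of course not anything resembling period Bell System equipment; the example number is invented):

```python
# Split a ten-digit North American number into its three routing fields:
# NPA (area code), central office code, and four-digit line number.
def split_number(digits: str):
    assert len(digits) == 10 and digits.isdigit()
    return digits[:3], digits[3:6], digits[6:]

npa, office, line = split_number("2125551234")
assert (npa, office, line) == ("212", "555", "1234")
```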
https://en.wikipedia.org/wiki/Operator_Toll_Dialing
Operator grammar is a mathematical theory of human language that explains how language carries information . This theory is the culmination of the life work of Zellig Harris , with major publications toward the end of the last century. Operator grammar proposes that each human language is a self-organizing system in which both the syntactic and semantic properties of a word are established purely in relation to other words. Thus, no external system ( metalanguage ) is required to define the rules of a language. Instead, these rules are learned through exposure to usage and through participation, as is the case with most social behavior . The theory is consistent with the idea that language evolved gradually, with each successive generation introducing new complexity and variation. Operator grammar posits three universal constraints: dependency (certain words depend on the presence of other words to form an utterance), likelihood (some combinations of words and their dependents are more likely than others) and reduction (words in high likelihood combinations can be reduced to shorter forms, and sometimes omitted completely). Together these provide a theory of language information : dependency builds a predicate–argument structure ; likelihood creates distinct meanings; reduction allows compact forms for communication. The fundamental mechanism of operator grammar is the dependency constraint: certain words ( operators ) require that one or more words (arguments) be present in an utterance. In the sentence John wears boots , the operator wears requires the presence of two arguments, such as John and boots . (This definition of dependency differs from other dependency grammars in which the arguments are said to depend on the operators.) In each language the dependency relation among words gives rise to syntactic categories in which the allowable arguments of an operator are defined in terms of their dependency requirements. 
Class N contains the words that do not require the presence of other words (e.g. John ). Class O N contains the words that require exactly one word of type N (e.g. stumble ). Class O O contains the words that require exactly one word of type O (e.g. handsome ). Class O NN contains the words that require two words of type N (e.g. wear ). Class O OO contains the words that require two words of type O (e.g. because ), as in John stumbles because John wears boots . Other classes include O ON (e.g. with ), O NO (e.g. say ), O NNN (e.g. put ), and O NNO (e.g. ask ). The categories in operator grammar are universal and are defined purely in terms of how words relate to other words, and do not rely on an external set of categories such as noun, verb, adjective, adverb, preposition, conjunction, etc. The dependency properties of each word are observable through usage and therefore learnable. The dependency constraint creates a structure (syntax) in which any word of the appropriate class can be an argument for a given operator. The likelihood constraint places additional restrictions on this structure by making some operator/argument combinations more likely than others. Thus, John wears hats is more likely than John wears snow which in turn is more likely than John wears vacation . The likelihood constraint creates meaning (semantics) by defining each word in terms of the words it can take as arguments, or of which it can be an argument. Each word has a unique set of words with which it has been observed to occur called its selection . The coherent selection of a word is the set of words for which the dependency relation has above average likelihood. Words that are similar in meaning have similar coherent selection. This approach to meaning is self-organizing in that no external system is necessary to define what words mean. Instead, the meaning of the word is determined by its usage within a population of speakers. 
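The dependency constraint and its word classes can be modeled as a small lookup table. A hypothetical Python sketch (the lexicon entries follow the article's examples; the function names are our own):

```python
# Each operator lists the classes of its required arguments ('N' or 'O'),
# per the dependency constraint; class-N words require no arguments.
LEXICON = {
    "John":    [],            # class N: no required arguments
    "stumble": ["N"],         # class O_N
    "wear":    ["N", "N"],    # class O_NN
    "because": ["O", "O"],    # class O_OO
}

def word_class(word):
    # A word with no argument requirements is class N; otherwise class O.
    return "N" if not LEXICON[word] else "O"

def satisfies_dependency(operator, arguments):
    """True if the arguments' classes match the operator's requirements."""
    required = LEXICON[operator]
    return (len(arguments) == len(required) and
            all(word_class(a) == r for a, r in zip(arguments, required)))

assert satisfies_dependency("wear", ["John", "John"])        # O_NN + two N
assert not satisfies_dependency("wear", ["John"])            # argument missing
assert satisfies_dependency("because", ["stumble", "wear"])  # O_OO + two O
```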
Patterns of frequent use are observable and therefore learnable. New words can be introduced at any time and defined through usage. In this sense, link grammar could be viewed as a kind of operator grammar, in that the linkage of words is determined entirely by their context, and that each selection is assigned a log-likelihood. The reduction constraint acts on high likelihood combinations of operators and arguments and makes more compact forms. Certain reductions allow words to be omitted completely from an utterance. For example, I expect John to come is reducible to I expect John , because to come is highly likely under expect . The sentence John wears boots and John wears hats can be reduced to John wears boots and hats because repetition of the first argument John under the operator and is highly likely. John reads things can be reduced to John reads , because the argument things has high likelihood of occurring under any operator. Certain reductions reduce words to shorter forms, creating pronouns, suffixes and prefixes ( morphology ). John wears boots and John wears hats can be reduced to John wears boots and he wears hats , where the pronoun he is a reduced form of John . Suffixes and prefixes can be obtained by appending other freely occurring words, or variants of these. John is able to be liked can be reduced to John is likeable . John is thoughtful is reduced from John is full of thought , and John is anti-war from John is against war . Modifiers are the result of several of these kinds of reductions, which give rise to adjectives, adverbs, prepositional phrases , subordinate clauses , etc. Each language has a unique set of reductions. For example, some languages have morphology and some don’t; some transpose short modifiers and some do not. Each word in a language participates only in certain kinds of reductions. However, in each case, the reduced material can be reconstructed from knowledge of what is likely in the given operator/argument combination. 
The reductions in which each word participates are observable and therefore learnable, just as one learns a word’s dependency and likelihood properties. The importance of reductions in operator grammar is that they separate sentences that contain reduced forms from those that don’t (base sentences). All reductions are paraphrases , since they do not remove any information, just make sentences more compact. Thus, the base sentences contain all the information of the language and the reduced sentences are variants of these. Base sentences are made up of simple words without modifiers and largely without affixes, e.g. snow falls , sheep eat grass , John knows sheep eat grass , that sheep eat snow surprises John . Each operator in a sentence makes a contribution in information according to its likelihood of occurrence with its arguments. Highly expected combinations have low information; rare combinations have high information. The precise contribution of an operator is determined by its selection, the set of words with which it occurs with high frequency. The arguments boots , hats , sheep , grass and snow differ in meaning according to the operators for which they can appear with high likelihood in first or second argument position. For example, snow is expected as first argument of fall but not of eat , while the reverse is true of sheep . Similarly, the operators eat , devour , chew and swallow differ in meaning to the extent that the arguments they select and the operators that select them differ. Operator grammar predicts that the information carried by a sentence is the accumulation of contributions of each argument and operator. The increment of information that a given word adds to a new sentence is determined by how it was used before. In turn, new usages stretch or even alter the information content associated with a word. 
Because this process is based on high frequency usage, the meanings of words are relatively stable over time, but can change in accordance with the needs of a linguistic community.
https://en.wikipedia.org/wiki/Operator_grammar